2025-07-12 12:58:36.869354 | Job console starting
2025-07-12 12:58:36.889722 | Updating git repos
2025-07-12 12:58:36.976742 | Cloning repos into workspace
2025-07-12 12:58:37.193095 | Restoring repo states
2025-07-12 12:58:37.221269 | Merging changes
2025-07-12 12:58:37.709317 | Checking out repos
2025-07-12 12:58:37.965545 | Preparing playbooks
2025-07-12 12:58:38.716270 | Running Ansible setup
2025-07-12 12:58:43.071668 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-12 12:58:43.817761 |
2025-07-12 12:58:43.817924 | PLAY [Base pre]
2025-07-12 12:58:43.834927 |
2025-07-12 12:58:43.835100 | TASK [Setup log path fact]
2025-07-12 12:58:43.865372 | orchestrator | ok
2025-07-12 12:58:43.885143 |
2025-07-12 12:58:43.885331 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-12 12:58:43.925531 | orchestrator | ok
2025-07-12 12:58:43.937370 |
2025-07-12 12:58:43.937487 | TASK [emit-job-header : Print job information]
2025-07-12 12:58:43.997500 | # Job Information
2025-07-12 12:58:43.997691 | Ansible Version: 2.16.14
2025-07-12 12:58:43.997726 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-07-12 12:58:43.997759 | Pipeline: label
2025-07-12 12:58:43.997783 | Executor: 521e9411259a
2025-07-12 12:58:43.997804 | Triggered by: https://github.com/osism/testbed/pull/2740
2025-07-12 12:58:43.997826 | Event ID: e7f40800-5f1f-11f0-836d-fb62bbb5ef7a
2025-07-12 12:58:44.004708 |
2025-07-12 12:58:44.004824 | LOOP [emit-job-header : Print node information]
2025-07-12 12:58:44.118573 | orchestrator | ok:
2025-07-12 12:58:44.118826 | orchestrator | # Node Information
2025-07-12 12:58:44.118912 | orchestrator | Inventory Hostname: orchestrator
2025-07-12 12:58:44.118955 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-12 12:58:44.118978 | orchestrator | Username: zuul-testbed05
2025-07-12 12:58:44.118999 | orchestrator | Distro: Debian 12.11
2025-07-12 12:58:44.119022 | orchestrator | Provider: static-testbed
2025-07-12 12:58:44.119043 | orchestrator | Region:
2025-07-12 12:58:44.119065 | orchestrator | Label: testbed-orchestrator
2025-07-12 12:58:44.119085 | orchestrator | Product Name: OpenStack Nova
2025-07-12 12:58:44.119103 | orchestrator | Interface IP: 81.163.193.140
2025-07-12 12:58:44.141232 |
2025-07-12 12:58:44.141362 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-12 12:58:44.601418 | orchestrator -> localhost | changed
2025-07-12 12:58:44.609890 |
2025-07-12 12:58:44.610041 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-12 12:58:45.679925 | orchestrator -> localhost | changed
2025-07-12 12:58:45.694697 |
2025-07-12 12:58:45.694828 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-12 12:58:45.972775 | orchestrator -> localhost | ok
2025-07-12 12:58:45.979702 |
2025-07-12 12:58:45.979813 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-12 12:58:46.008307 | orchestrator | ok
2025-07-12 12:58:46.023975 | orchestrator | included: /var/lib/zuul/builds/c330580256be49afbe62cc1d895a3b2b/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-12 12:58:46.031902 |
2025-07-12 12:58:46.032024 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-12 12:58:46.855355 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-12 12:58:46.856138 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c330580256be49afbe62cc1d895a3b2b/work/c330580256be49afbe62cc1d895a3b2b_id_rsa
2025-07-12 12:58:46.856257 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c330580256be49afbe62cc1d895a3b2b/work/c330580256be49afbe62cc1d895a3b2b_id_rsa.pub
2025-07-12 12:58:46.856423 | orchestrator -> localhost | The key fingerprint is:
2025-07-12 12:58:46.856665 | orchestrator -> localhost | SHA256:j4QmFX+DwW+IOfKjB30CrtjcwuInaAapNAul6vcop2I zuul-build-sshkey
2025-07-12 12:58:46.856730 | orchestrator -> localhost | The key's randomart image is:
2025-07-12 12:58:46.856808 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-12 12:58:46.856868 | orchestrator -> localhost | | ... |
2025-07-12 12:58:46.856926 | orchestrator -> localhost | | o.o |
2025-07-12 12:58:46.857001 | orchestrator -> localhost | | .oooo |
2025-07-12 12:58:46.857054 | orchestrator -> localhost | | . o.+...o. |
2025-07-12 12:58:46.857106 | orchestrator -> localhost | | + ..=o.S. |
2025-07-12 12:58:46.857172 | orchestrator -> localhost | |=o oo=..o |
2025-07-12 12:58:46.857225 | orchestrator -> localhost | |B=oo o +. . |
2025-07-12 12:58:46.857276 | orchestrator -> localhost | |BE*++ . |
2025-07-12 12:58:46.857331 | orchestrator -> localhost | |B=Bo.o |
2025-07-12 12:58:46.857386 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-12 12:58:46.857544 | orchestrator -> localhost | ok: Runtime: 0:00:00.388125
2025-07-12 12:58:46.875335 |
2025-07-12 12:58:46.875485 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-12 12:58:46.911139 | orchestrator | ok
2025-07-12 12:58:46.925047 | orchestrator | included: /var/lib/zuul/builds/c330580256be49afbe62cc1d895a3b2b/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-12 12:58:46.934710 |
2025-07-12 12:58:46.934812 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-12 12:58:46.957885 | orchestrator | skipping: Conditional result was False
2025-07-12 12:58:46.967712 |
2025-07-12 12:58:46.967841 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-12 12:58:47.530888 | orchestrator | changed
2025-07-12 12:58:47.538123 |
2025-07-12 12:58:47.538235 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-12 12:58:47.821727 | orchestrator | ok
2025-07-12 12:58:47.833233 |
2025-07-12 12:58:47.833352 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-12 12:58:48.674071 | orchestrator | ok
2025-07-12 12:58:48.681512 |
2025-07-12 12:58:48.681612 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-12 12:58:49.114193 | orchestrator | ok
2025-07-12 12:58:49.124028 |
2025-07-12 12:58:49.124152 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-12 12:58:49.147735 | orchestrator | skipping: Conditional result was False
2025-07-12 12:58:49.154195 |
2025-07-12 12:58:49.154290 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-12 12:58:49.541652 | orchestrator -> localhost | changed
2025-07-12 12:58:49.554904 |
2025-07-12 12:58:49.555021 | TASK [add-build-sshkey : Add back temp key]
2025-07-12 12:58:49.840808 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c330580256be49afbe62cc1d895a3b2b/work/c330580256be49afbe62cc1d895a3b2b_id_rsa (zuul-build-sshkey)
2025-07-12 12:58:49.841033 | orchestrator -> localhost | ok: Runtime: 0:00:00.016364
2025-07-12 12:58:49.848099 |
2025-07-12 12:58:49.848189 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-12 12:58:50.231876 | orchestrator | ok
2025-07-12 12:58:50.238125 |
2025-07-12 12:58:50.238239 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-12 12:58:50.272262 | orchestrator | skipping: Conditional result was False
2025-07-12 12:58:50.326379 |
2025-07-12 12:58:50.326514 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-12 12:58:50.740870 | orchestrator | ok
2025-07-12 12:58:50.772636 |
2025-07-12 12:58:50.772841 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-12 12:58:50.823719 | orchestrator | ok
2025-07-12 12:58:50.833578 |
2025-07-12 12:58:50.833741 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-12 12:58:51.142681 | orchestrator -> localhost | ok
2025-07-12 12:58:51.150990 |
2025-07-12 12:58:51.151119 | TASK [validate-host : Collect information about the host]
2025-07-12 12:58:52.330009 | orchestrator | ok
2025-07-12 12:58:52.347305 |
2025-07-12 12:58:52.347439 | TASK [validate-host : Sanitize hostname]
2025-07-12 12:58:52.413106 | orchestrator | ok
2025-07-12 12:58:52.419587 |
2025-07-12 12:58:52.419707 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-12 12:58:53.001477 | orchestrator -> localhost | changed
2025-07-12 12:58:53.008391 |
2025-07-12 12:58:53.008516 | TASK [validate-host : Collect information about zuul worker]
2025-07-12 12:58:53.442634 | orchestrator | ok
2025-07-12 12:58:53.450619 |
2025-07-12 12:58:53.450752 | TASK [validate-host : Write out all zuul information for each host]
2025-07-12 12:58:54.023897 | orchestrator -> localhost | changed
2025-07-12 12:58:54.040787 |
2025-07-12 12:58:54.041100 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-12 12:58:54.330760 | orchestrator | ok
2025-07-12 12:58:54.340921 |
2025-07-12 12:58:54.341112 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-12 12:59:32.947076 | orchestrator | changed:
2025-07-12 12:59:32.947354 | orchestrator | .d..t...... src/
2025-07-12 12:59:32.947406 | orchestrator | .d..t...... src/github.com/
2025-07-12 12:59:32.947441 | orchestrator | .d..t...... src/github.com/osism/
2025-07-12 12:59:32.947471 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-12 12:59:32.947499 | orchestrator | RedHat.yml
2025-07-12 12:59:32.961756 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-12 12:59:32.961775 | orchestrator | RedHat.yml
2025-07-12 12:59:32.961834 | orchestrator | = 2.2.0"...
2025-07-12 12:59:45.032138 | orchestrator | 12:59:45.031 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-07-12 12:59:45.061686 | orchestrator | 12:59:45.061 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-07-12 12:59:46.201460 | orchestrator | 12:59:46.201 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-12 12:59:47.278769 | orchestrator | 12:59:47.278 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-12 12:59:47.952841 | orchestrator | 12:59:47.952 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-12 12:59:48.554692 | orchestrator | 12:59:48.554 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-12 12:59:49.909167 | orchestrator | 12:59:49.908 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.0...
2025-07-12 12:59:51.179976 | orchestrator | 12:59:51.179 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.0 (signed, key ID 4F80527A391BEFD2)
2025-07-12 12:59:51.180055 | orchestrator | 12:59:51.179 STDOUT terraform: Providers are signed by their developers.
2025-07-12 12:59:51.180063 | orchestrator | 12:59:51.179 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-12 12:59:51.180068 | orchestrator | 12:59:51.179 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-12 12:59:51.180073 | orchestrator | 12:59:51.179 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-12 12:59:51.180086 | orchestrator | 12:59:51.179 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-12 12:59:51.180092 | orchestrator | 12:59:51.179 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-12 12:59:51.180097 | orchestrator | 12:59:51.179 STDOUT terraform: you run "tofu init" in the future.
2025-07-12 12:59:51.183969 | orchestrator | 12:59:51.183 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-12 12:59:51.183995 | orchestrator | 12:59:51.183 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-12 12:59:51.184001 | orchestrator | 12:59:51.183 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-12 12:59:51.184005 | orchestrator | 12:59:51.183 STDOUT terraform: should now work.
2025-07-12 12:59:51.184010 | orchestrator | 12:59:51.183 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-12 12:59:51.184014 | orchestrator | 12:59:51.183 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-12 12:59:51.184020 | orchestrator | 12:59:51.183 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-12 12:59:51.306590 | orchestrator | 12:59:51.306 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-07-12 12:59:51.306743 | orchestrator | 12:59:51.306 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-12 12:59:51.524864 | orchestrator | 12:59:51.524 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-12 12:59:51.524935 | orchestrator | 12:59:51.524 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-12 12:59:51.524945 | orchestrator | 12:59:51.524 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-12 12:59:51.524950 | orchestrator | 12:59:51.524 STDOUT terraform: for this configuration.
2025-07-12 12:59:51.656430 | orchestrator | 12:59:51.654 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-07-12 12:59:51.656514 | orchestrator | 12:59:51.655 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-12 12:59:51.765278 | orchestrator | 12:59:51.765 STDOUT terraform: ci.auto.tfvars
2025-07-12 12:59:51.771421 | orchestrator | 12:59:51.770 STDOUT terraform: default_custom.tf
2025-07-12 12:59:51.927105 | orchestrator | 12:59:51.926 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-07-12 12:59:52.924129 | orchestrator | 12:59:52.923 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-07-12 12:59:53.442955 | orchestrator | 12:59:53.442 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-12 12:59:53.698370 | orchestrator | 12:59:53.698 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-12 12:59:53.698500 | orchestrator | 12:59:53.698 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-12 12:59:53.698509 | orchestrator | 12:59:53.698 STDOUT terraform:  + create
2025-07-12 12:59:53.698515 | orchestrator | 12:59:53.698 STDOUT terraform:  <= read (data resources)
2025-07-12 12:59:53.698520 | orchestrator | 12:59:53.698 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-12 12:59:53.704389 | orchestrator | 12:59:53.703 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-07-12 12:59:53.704418 | orchestrator | 12:59:53.703 STDOUT terraform:  # (config refers to values not yet known)
2025-07-12 12:59:53.704424 | orchestrator | 12:59:53.703 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-07-12 12:59:53.704437 | orchestrator | 12:59:53.703 STDOUT terraform:  + checksum = (known after apply)
2025-07-12 12:59:53.704441 | orchestrator | 12:59:53.704 STDOUT terraform:  + created_at = (known after apply)
2025-07-12 12:59:53.704444 | orchestrator | 12:59:53.704 STDOUT terraform:  + file = (known after apply)
2025-07-12 12:59:53.704448 | orchestrator | 12:59:53.704 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.704452 | orchestrator | 12:59:53.704 STDOUT terraform:  + metadata = (known after apply)
2025-07-12 12:59:53.704471 | orchestrator | 12:59:53.704 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-07-12 12:59:53.704475 | orchestrator | 12:59:53.704 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-07-12 12:59:53.704485 | orchestrator | 12:59:53.704 STDOUT terraform:  + most_recent = true
2025-07-12 12:59:53.704490 | orchestrator | 12:59:53.704 STDOUT terraform:  + name = (known after apply)
2025-07-12 12:59:53.704494 | orchestrator | 12:59:53.704 STDOUT terraform:  + protected = (known after apply)
2025-07-12 12:59:53.704497 | orchestrator | 12:59:53.704 STDOUT terraform:  + region = (known after apply)
2025-07-12 12:59:53.704502 | orchestrator | 12:59:53.704 STDOUT terraform:  + schema = (known after apply)
2025-07-12 12:59:53.704506 | orchestrator | 12:59:53.704 STDOUT terraform:  + size_bytes = (known after apply)
2025-07-12 12:59:53.704509 | orchestrator | 12:59:53.704 STDOUT terraform:  + tags = (known after apply)
2025-07-12 12:59:53.704513 | orchestrator | 12:59:53.704 STDOUT terraform:  + updated_at = (known after apply)
2025-07-12 12:59:53.704517 | orchestrator | 12:59:53.704 STDOUT terraform:  }
2025-07-12 12:59:53.710079 | orchestrator | 12:59:53.704 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-07-12 12:59:53.710106 | orchestrator | 12:59:53.704 STDOUT terraform:  # (config refers to values not yet known)
2025-07-12 12:59:53.710111 | orchestrator | 12:59:53.704 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-07-12 12:59:53.710115 | orchestrator | 12:59:53.704 STDOUT terraform:  + checksum = (known after apply)
2025-07-12 12:59:53.710128 | orchestrator | 12:59:53.704 STDOUT terraform:  + created_at = (known after apply)
2025-07-12 12:59:53.710133 | orchestrator | 12:59:53.704 STDOUT terraform:  + file = (known after apply)
2025-07-12 12:59:53.710136 | orchestrator | 12:59:53.704 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710140 | orchestrator | 12:59:53.704 STDOUT terraform:  + metadata = (known after apply)
2025-07-12 12:59:53.710144 | orchestrator | 12:59:53.704 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-07-12 12:59:53.710147 | orchestrator | 12:59:53.704 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-07-12 12:59:53.710156 | orchestrator | 12:59:53.704 STDOUT terraform:  + most_recent = true
2025-07-12 12:59:53.710160 | orchestrator | 12:59:53.704 STDOUT terraform:  + name = (known after apply)
2025-07-12 12:59:53.710164 | orchestrator | 12:59:53.704 STDOUT terraform:  + protected = (known after apply)
2025-07-12 12:59:53.710168 | orchestrator | 12:59:53.704 STDOUT terraform:  + region = (known after apply)
2025-07-12 12:59:53.710172 | orchestrator | 12:59:53.704 STDOUT terraform:  + schema = (known after apply)
2025-07-12 12:59:53.710175 | orchestrator | 12:59:53.704 STDOUT terraform:  + size_bytes = (known after apply)
2025-07-12 12:59:53.710179 | orchestrator | 12:59:53.704 STDOUT terraform:  + tags = (known after apply)
2025-07-12 12:59:53.710183 | orchestrator | 12:59:53.704 STDOUT terraform:  + updated_at = (known after apply)
2025-07-12 12:59:53.710187 | orchestrator | 12:59:53.705 STDOUT terraform:  }
2025-07-12 12:59:53.710191 | orchestrator | 12:59:53.705 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-07-12 12:59:53.710201 | orchestrator | 12:59:53.705 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-07-12 12:59:53.710205 | orchestrator | 12:59:53.705 STDOUT terraform:  + content = (known after apply)
2025-07-12 12:59:53.710209 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-12 12:59:53.710213 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-12 12:59:53.710216 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-12 12:59:53.710220 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-12 12:59:53.710224 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-12 12:59:53.710228 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-12 12:59:53.710232 | orchestrator | 12:59:53.705 STDOUT terraform:  + directory_permission = "0777"
2025-07-12 12:59:53.710235 | orchestrator | 12:59:53.705 STDOUT terraform:  + file_permission = "0644"
2025-07-12 12:59:53.710239 | orchestrator | 12:59:53.705 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-07-12 12:59:53.710243 | orchestrator | 12:59:53.705 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710247 | orchestrator | 12:59:53.705 STDOUT terraform:  }
2025-07-12 12:59:53.710250 | orchestrator | 12:59:53.705 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-07-12 12:59:53.710254 | orchestrator | 12:59:53.705 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-07-12 12:59:53.710258 | orchestrator | 12:59:53.705 STDOUT terraform:  + content = (known after apply)
2025-07-12 12:59:53.710261 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-12 12:59:53.710265 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-12 12:59:53.710275 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-12 12:59:53.710279 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-12 12:59:53.710283 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-12 12:59:53.710287 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-12 12:59:53.710290 | orchestrator | 12:59:53.705 STDOUT terraform:  + directory_permission = "0777"
2025-07-12 12:59:53.710294 | orchestrator | 12:59:53.705 STDOUT terraform:  + file_permission = "0644"
2025-07-12 12:59:53.710298 | orchestrator | 12:59:53.705 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-07-12 12:59:53.710302 | orchestrator | 12:59:53.705 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710305 | orchestrator | 12:59:53.705 STDOUT terraform:  }
2025-07-12 12:59:53.710312 | orchestrator | 12:59:53.705 STDOUT terraform:  # local_file.inventory will be created
2025-07-12 12:59:53.710316 | orchestrator | 12:59:53.705 STDOUT terraform:  + resource "local_file" "inventory" {
2025-07-12 12:59:53.710319 | orchestrator | 12:59:53.705 STDOUT terraform:  + content = (known after apply)
2025-07-12 12:59:53.710327 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-12 12:59:53.710330 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-12 12:59:53.710334 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-12 12:59:53.710338 | orchestrator | 12:59:53.705 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-12 12:59:53.710341 | orchestrator | 12:59:53.706 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-12 12:59:53.710345 | orchestrator | 12:59:53.706 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-12 12:59:53.710349 | orchestrator | 12:59:53.706 STDOUT terraform:  + directory_permission = "0777"
2025-07-12 12:59:53.710353 | orchestrator | 12:59:53.706 STDOUT terraform:  + file_permission = "0644"
2025-07-12 12:59:53.710356 | orchestrator | 12:59:53.706 STDOUT terraform:  + filename = "inventory.ci"
2025-07-12 12:59:53.710360 | orchestrator | 12:59:53.706 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710364 | orchestrator | 12:59:53.706 STDOUT terraform:  }
2025-07-12 12:59:53.710368 | orchestrator | 12:59:53.706 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-07-12 12:59:53.710371 | orchestrator | 12:59:53.706 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-07-12 12:59:53.710376 | orchestrator | 12:59:53.706 STDOUT terraform:  + content = (sensitive value)
2025-07-12 12:59:53.710379 | orchestrator | 12:59:53.706 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-12 12:59:53.710383 | orchestrator | 12:59:53.706 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-12 12:59:53.710387 | orchestrator | 12:59:53.706 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-12 12:59:53.710391 | orchestrator | 12:59:53.706 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-12 12:59:53.710394 | orchestrator | 12:59:53.706 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-12 12:59:53.710398 | orchestrator | 12:59:53.706 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-12 12:59:53.710402 | orchestrator | 12:59:53.706 STDOUT terraform:  + directory_permission = "0700"
2025-07-12 12:59:53.710405 | orchestrator | 12:59:53.706 STDOUT terraform:  + file_permission = "0600"
2025-07-12 12:59:53.710409 | orchestrator | 12:59:53.706 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-07-12 12:59:53.710413 | orchestrator | 12:59:53.706 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710417 | orchestrator | 12:59:53.706 STDOUT terraform:  }
2025-07-12 12:59:53.710423 | orchestrator | 12:59:53.706 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-07-12 12:59:53.710427 | orchestrator | 12:59:53.706 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-07-12 12:59:53.710431 | orchestrator | 12:59:53.706 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710435 | orchestrator | 12:59:53.706 STDOUT terraform:  }
2025-07-12 12:59:53.710439 | orchestrator | 12:59:53.706 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-12 12:59:53.710449 | orchestrator | 12:59:53.706 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-12 12:59:53.710453 | orchestrator | 12:59:53.706 STDOUT terraform:  + attachment = (known after apply)
2025-07-12 12:59:53.710456 | orchestrator | 12:59:53.706 STDOUT terraform:  + availability_zone = "nova"
2025-07-12 12:59:53.710460 | orchestrator | 12:59:53.706 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710464 | orchestrator | 12:59:53.706 STDOUT terraform:  + image_id = (known after apply)
2025-07-12 12:59:53.710468 | orchestrator | 12:59:53.706 STDOUT terraform:  + metadata = (known after apply)
2025-07-12 12:59:53.710472 | orchestrator | 12:59:53.706 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-07-12 12:59:53.710475 | orchestrator | 12:59:53.706 STDOUT terraform:  + region = (known after apply)
2025-07-12 12:59:53.710479 | orchestrator | 12:59:53.707 STDOUT terraform:  + size = 80
2025-07-12 12:59:53.710483 | orchestrator | 12:59:53.707 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-12 12:59:53.710487 | orchestrator | 12:59:53.707 STDOUT terraform:  + volume_type = "ssd"
2025-07-12 12:59:53.710490 | orchestrator | 12:59:53.707 STDOUT terraform:  }
2025-07-12 12:59:53.710494 | orchestrator | 12:59:53.707 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-12 12:59:53.710498 | orchestrator | 12:59:53.707 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 12:59:53.710502 | orchestrator | 12:59:53.707 STDOUT terraform:  + attachment = (known after apply)
2025-07-12 12:59:53.710505 | orchestrator | 12:59:53.707 STDOUT terraform:  + availability_zone = "nova"
2025-07-12 12:59:53.710509 | orchestrator | 12:59:53.707 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710513 | orchestrator | 12:59:53.707 STDOUT terraform:  + image_id = (known after apply)
2025-07-12 12:59:53.710517 | orchestrator | 12:59:53.707 STDOUT terraform:  + metadata = (known after apply)
2025-07-12 12:59:53.710520 | orchestrator | 12:59:53.707 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-07-12 12:59:53.710524 | orchestrator | 12:59:53.707 STDOUT terraform:  + region = (known after apply)
2025-07-12 12:59:53.710528 | orchestrator | 12:59:53.707 STDOUT terraform:  + size = 80
2025-07-12 12:59:53.710532 | orchestrator | 12:59:53.707 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-12 12:59:53.710535 | orchestrator | 12:59:53.707 STDOUT terraform:  + volume_type = "ssd"
2025-07-12 12:59:53.710539 | orchestrator | 12:59:53.707 STDOUT terraform:  }
2025-07-12 12:59:53.710543 | orchestrator | 12:59:53.707 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-12 12:59:53.710547 | orchestrator | 12:59:53.707 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 12:59:53.710550 | orchestrator | 12:59:53.707 STDOUT terraform:  + attachment = (known after apply)
2025-07-12 12:59:53.710558 | orchestrator | 12:59:53.707 STDOUT terraform:  + availability_zone = "nova"
2025-07-12 12:59:53.710561 | orchestrator | 12:59:53.707 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710565 | orchestrator | 12:59:53.707 STDOUT terraform:  + image_id = (known after apply)
2025-07-12 12:59:53.710569 | orchestrator | 12:59:53.707 STDOUT terraform:  + metadata = (known after apply)
2025-07-12 12:59:53.710575 | orchestrator | 12:59:53.707 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-07-12 12:59:53.710579 | orchestrator | 12:59:53.707 STDOUT terraform:  + region = (known after apply)
2025-07-12 12:59:53.710583 | orchestrator | 12:59:53.707 STDOUT terraform:  + size = 80
2025-07-12 12:59:53.710587 | orchestrator | 12:59:53.707 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-12 12:59:53.710590 | orchestrator | 12:59:53.707 STDOUT terraform:  + volume_type = "ssd"
2025-07-12 12:59:53.710594 | orchestrator | 12:59:53.707 STDOUT terraform:  }
2025-07-12 12:59:53.710598 | orchestrator | 12:59:53.707 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-12 12:59:53.710602 | orchestrator | 12:59:53.707 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 12:59:53.710606 | orchestrator | 12:59:53.707 STDOUT terraform:  + attachment = (known after apply)
2025-07-12 12:59:53.710612 | orchestrator | 12:59:53.707 STDOUT terraform:  + availability_zone = "nova"
2025-07-12 12:59:53.710616 | orchestrator | 12:59:53.707 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710619 | orchestrator | 12:59:53.707 STDOUT terraform:  + image_id = (known after apply)
2025-07-12 12:59:53.710623 | orchestrator | 12:59:53.707 STDOUT terraform:  + metadata = (known after apply)
2025-07-12 12:59:53.710627 | orchestrator | 12:59:53.707 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-07-12 12:59:53.710631 | orchestrator | 12:59:53.708 STDOUT terraform:  + region = (known after apply)
2025-07-12 12:59:53.710647 | orchestrator | 12:59:53.708 STDOUT terraform:  + size = 80
2025-07-12 12:59:53.710651 | orchestrator | 12:59:53.708 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-12 12:59:53.710655 | orchestrator | 12:59:53.708 STDOUT terraform:  + volume_type = "ssd"
2025-07-12 12:59:53.710659 | orchestrator | 12:59:53.708 STDOUT terraform:  }
2025-07-12 12:59:53.710662 | orchestrator | 12:59:53.708 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-12 12:59:53.710666 | orchestrator | 12:59:53.708 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 12:59:53.710670 | orchestrator | 12:59:53.708 STDOUT terraform:  + attachment = (known after apply)
2025-07-12 12:59:53.710674 | orchestrator | 12:59:53.708 STDOUT terraform:  + availability_zone = "nova"
2025-07-12 12:59:53.710677 | orchestrator | 12:59:53.708 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710681 | orchestrator | 12:59:53.708 STDOUT terraform:  + image_id = (known after apply)
2025-07-12 12:59:53.710685 | orchestrator | 12:59:53.708 STDOUT terraform:  + metadata = (known after apply)
2025-07-12 12:59:53.710692 | orchestrator | 12:59:53.708 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-07-12 12:59:53.710695 | orchestrator | 12:59:53.708 STDOUT terraform:  + region = (known after apply)
2025-07-12 12:59:53.710699 | orchestrator | 12:59:53.708 STDOUT terraform:  + size = 80
2025-07-12 12:59:53.710703 | orchestrator | 12:59:53.708 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-12 12:59:53.710706 | orchestrator | 12:59:53.708 STDOUT terraform:  + volume_type = "ssd"
2025-07-12 12:59:53.710710 | orchestrator | 12:59:53.708 STDOUT terraform:  }
2025-07-12 12:59:53.710714 | orchestrator | 12:59:53.708 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-12 12:59:53.710721 | orchestrator | 12:59:53.708 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 12:59:53.710725 | orchestrator | 12:59:53.708 STDOUT terraform:  + attachment = (known after apply)
2025-07-12 12:59:53.710729 | orchestrator | 12:59:53.708 STDOUT terraform:  + availability_zone = "nova"
2025-07-12 12:59:53.710732 | orchestrator | 12:59:53.708 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710738 | orchestrator | 12:59:53.708 STDOUT terraform:  + image_id = (known after apply)
2025-07-12 12:59:53.710742 | orchestrator | 12:59:53.708 STDOUT terraform:  + metadata = (known after apply)
2025-07-12 12:59:53.710746 | orchestrator | 12:59:53.708 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-07-12 12:59:53.710749 | orchestrator | 12:59:53.708 STDOUT terraform:  + region = (known after apply)
2025-07-12 12:59:53.710753 | orchestrator | 12:59:53.708 STDOUT terraform:  + size = 80
2025-07-12 12:59:53.710757 | orchestrator | 12:59:53.708 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-12 12:59:53.710761 | orchestrator | 12:59:53.708 STDOUT terraform:  + volume_type = "ssd"
2025-07-12 12:59:53.710764 | orchestrator | 12:59:53.708 STDOUT terraform:  }
2025-07-12 12:59:53.710768 | orchestrator | 12:59:53.708 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-07-12 12:59:53.710772 | orchestrator | 12:59:53.708 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 12:59:53.710776 | orchestrator | 12:59:53.708 STDOUT terraform:  + attachment = (known after apply)
2025-07-12 12:59:53.710779 | orchestrator | 12:59:53.709 STDOUT terraform:  + availability_zone = "nova"
2025-07-12 12:59:53.710783 | orchestrator | 12:59:53.709 STDOUT terraform:  + id = (known after apply)
2025-07-12 12:59:53.710787 | orchestrator | 12:59:53.709 STDOUT terraform:  + image_id = (known after apply)
2025-07-12 12:59:53.710791 | orchestrator | 12:59:53.709 STDOUT terraform:  + metadata = (known after apply)
2025-07-12 12:59:53.710794 | orchestrator | 12:59:53.709 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-07-12 12:59:53.710798 | orchestrator | 12:59:53.709 STDOUT terraform:  + region = (known after apply)
2025-07-12 12:59:53.710802 | orchestrator | 12:59:53.709 STDOUT terraform:  + size = 80
2025-07-12 12:59:53.710809 | orchestrator | 12:59:53.709 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-12 12:59:53.710812 | orchestrator | 12:59:53.709 STDOUT terraform:  + volume_type = "ssd"
2025-07-12 12:59:53.710816 | orchestrator | 12:59:53.709 STDOUT terraform:  }
2025-07-12 12:59:53.710820 | orchestrator | 12:59:53.709 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-07-12 12:59:53.710824 | orchestrator | 12:59:53.709 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-07-12 12:59:53.710830 | orchestrator | 12:59:53.709 STDOUT
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-0-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[6] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-6-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[7] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-7-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
orchestrator | 12:59:53.732 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 12:59:53.732188 | orchestrator | 12:59:53.732 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 12:59:53.732229 | orchestrator | 12:59:53.732 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:53.732258 | orchestrator | 12:59:53.732 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 12:59:53.732289 | orchestrator | 12:59:53.732 STDOUT terraform:  + config_drive = true 2025-07-12 12:59:53.732334 | orchestrator | 12:59:53.732 STDOUT terraform:  + created = (known after apply) 2025-07-12 12:59:53.732412 | orchestrator | 12:59:53.732 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 12:59:53.741264 | orchestrator | 12:59:53.732 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 12:59:53.741352 | orchestrator | 12:59:53.741 STDOUT terraform:  + force_delete = false 2025-07-12 12:59:53.741407 | orchestrator | 12:59:53.741 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 12:59:53.741465 | orchestrator | 12:59:53.741 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.741993 | orchestrator | 12:59:53.741 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 12:59:53.744998 | orchestrator | 12:59:53.741 STDOUT terraform:  + image_name = (known after apply) 2025-07-12 12:59:53.745095 | orchestrator | 12:59:53.745 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 12:59:53.745138 | orchestrator | 12:59:53.745 STDOUT terraform:  + name = "testbed-node-4" 2025-07-12 12:59:53.745170 | orchestrator | 12:59:53.745 STDOUT terraform:  + power_state = "active" 2025-07-12 12:59:53.745214 | orchestrator | 12:59:53.745 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.745261 | orchestrator | 12:59:53.745 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 12:59:53.753239 | orchestrator | 12:59:53.753 STDOUT terraform:  + stop_before_destroy = 
false 2025-07-12 12:59:53.753325 | orchestrator | 12:59:53.753 STDOUT terraform:  + updated = (known after apply) 2025-07-12 12:59:53.753446 | orchestrator | 12:59:53.753 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-12 12:59:53.753482 | orchestrator | 12:59:53.753 STDOUT terraform:  + block_device { 2025-07-12 12:59:53.753532 | orchestrator | 12:59:53.753 STDOUT terraform:  + boot_index = 0 2025-07-12 12:59:53.753573 | orchestrator | 12:59:53.753 STDOUT terraform:  + delete_on_termination = false 2025-07-12 12:59:53.753625 | orchestrator | 12:59:53.753 STDOUT terraform:  + destination_type = "volume" 2025-07-12 12:59:53.753713 | orchestrator | 12:59:53.753 STDOUT terraform:  + multiattach = false 2025-07-12 12:59:53.753769 | orchestrator | 12:59:53.753 STDOUT terraform:  + source_type = "volume" 2025-07-12 12:59:53.753814 | orchestrator | 12:59:53.753 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 12:59:53.753852 | orchestrator | 12:59:53.753 STDOUT terraform:  } 2025-07-12 12:59:53.753873 | orchestrator | 12:59:53.753 STDOUT terraform:  + network { 2025-07-12 12:59:53.753908 | orchestrator | 12:59:53.753 STDOUT terraform:  + access_network = false 2025-07-12 12:59:53.753952 | orchestrator | 12:59:53.753 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-12 12:59:53.754002 | orchestrator | 12:59:53.753 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 12:59:53.754066 | orchestrator | 12:59:53.754 STDOUT terraform:  + mac = (known after apply) 2025-07-12 12:59:53.754134 | orchestrator | 12:59:53.754 STDOUT terraform:  + name = (known after apply) 2025-07-12 12:59:53.754182 | orchestrator | 12:59:53.754 STDOUT terraform:  + port = (known after apply) 2025-07-12 12:59:53.754228 | orchestrator | 12:59:53.754 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 12:59:53.754252 | orchestrator | 12:59:53.754 STDOUT terraform:  } 2025-07-12 12:59:53.754288 | orchestrator | 12:59:53.754 
STDOUT terraform:  } 2025-07-12 12:59:53.754366 | orchestrator | 12:59:53.754 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-07-12 12:59:53.754426 | orchestrator | 12:59:53.754 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-12 12:59:53.754473 | orchestrator | 12:59:53.754 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 12:59:53.754527 | orchestrator | 12:59:53.754 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 12:59:53.754569 | orchestrator | 12:59:53.754 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 12:59:53.754625 | orchestrator | 12:59:53.754 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:53.754683 | orchestrator | 12:59:53.754 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 12:59:53.754712 | orchestrator | 12:59:53.754 STDOUT terraform:  + config_drive = true 2025-07-12 12:59:53.754767 | orchestrator | 12:59:53.754 STDOUT terraform:  + created = (known after apply) 2025-07-12 12:59:53.754808 | orchestrator | 12:59:53.754 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 12:59:53.754858 | orchestrator | 12:59:53.754 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 12:59:53.754889 | orchestrator | 12:59:53.754 STDOUT terraform:  + force_delete = false 2025-07-12 12:59:53.754943 | orchestrator | 12:59:53.754 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 12:59:53.755000 | orchestrator | 12:59:53.754 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.755042 | orchestrator | 12:59:53.755 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 12:59:53.755098 | orchestrator | 12:59:53.755 STDOUT terraform:  + image_name = (known after apply) 2025-07-12 12:59:53.755138 | orchestrator | 12:59:53.755 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 12:59:53.755181 | orchestrator | 12:59:53.755 STDOUT terraform:  + name = 
"testbed-node-5" 2025-07-12 12:59:53.755211 | orchestrator | 12:59:53.755 STDOUT terraform:  + power_state = "active" 2025-07-12 12:59:53.755265 | orchestrator | 12:59:53.755 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.755319 | orchestrator | 12:59:53.755 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 12:59:53.755350 | orchestrator | 12:59:53.755 STDOUT terraform:  + stop_before_destroy = false 2025-07-12 12:59:53.755405 | orchestrator | 12:59:53.755 STDOUT terraform:  + updated = (known after apply) 2025-07-12 12:59:53.755475 | orchestrator | 12:59:53.755 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-12 12:59:53.755500 | orchestrator | 12:59:53.755 STDOUT terraform:  + block_device { 2025-07-12 12:59:53.755550 | orchestrator | 12:59:53.755 STDOUT terraform:  + boot_index = 0 2025-07-12 12:59:53.755585 | orchestrator | 12:59:53.755 STDOUT terraform:  + delete_on_termination = false 2025-07-12 12:59:53.755663 | orchestrator | 12:59:53.755 STDOUT terraform:  + destination_type = "volume" 2025-07-12 12:59:53.755716 | orchestrator | 12:59:53.755 STDOUT terraform:  + multiattach = false 2025-07-12 12:59:53.755754 | orchestrator | 12:59:53.755 STDOUT terraform:  + source_type = "volume" 2025-07-12 12:59:53.755814 | orchestrator | 12:59:53.755 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 12:59:53.755835 | orchestrator | 12:59:53.755 STDOUT terraform:  } 2025-07-12 12:59:53.755871 | orchestrator | 12:59:53.755 STDOUT terraform:  + network { 2025-07-12 12:59:53.755899 | orchestrator | 12:59:53.755 STDOUT terraform:  + access_network = false 2025-07-12 12:59:53.755951 | orchestrator | 12:59:53.755 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-12 12:59:53.755991 | orchestrator | 12:59:53.755 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 12:59:53.756044 | orchestrator | 12:59:53.756 STDOUT terraform:  + mac = (known after apply) 2025-07-12 
12:59:53.756096 | orchestrator | 12:59:53.756 STDOUT terraform:  + name = (known after apply) 2025-07-12 12:59:53.756142 | orchestrator | 12:59:53.756 STDOUT terraform:  + port = (known after apply) 2025-07-12 12:59:53.756197 | orchestrator | 12:59:53.756 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 12:59:53.756217 | orchestrator | 12:59:53.756 STDOUT terraform:  } 2025-07-12 12:59:53.756253 | orchestrator | 12:59:53.756 STDOUT terraform:  } 2025-07-12 12:59:53.756294 | orchestrator | 12:59:53.756 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-07-12 12:59:53.756349 | orchestrator | 12:59:53.756 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-07-12 12:59:53.756383 | orchestrator | 12:59:53.756 STDOUT terraform:  + fingerprint = (known after apply) 2025-07-12 12:59:53.756432 | orchestrator | 12:59:53.756 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.756461 | orchestrator | 12:59:53.756 STDOUT terraform:  + name = "testbed" 2025-07-12 12:59:53.756506 | orchestrator | 12:59:53.756 STDOUT terraform:  + private_key = (sensitive value) 2025-07-12 12:59:53.756541 | orchestrator | 12:59:53.756 STDOUT terraform:  + public_key = (known after apply) 2025-07-12 12:59:53.756589 | orchestrator | 12:59:53.756 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.756645 | orchestrator | 12:59:53.756 STDOUT terraform:  + user_id = (known after apply) 2025-07-12 12:59:53.756676 | orchestrator | 12:59:53.756 STDOUT terraform:  } 2025-07-12 12:59:53.756746 | orchestrator | 12:59:53.756 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-07-12 12:59:53.756810 | orchestrator | 12:59:53.756 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:53.756850 | orchestrator | 12:59:53.756 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:53.756885 | orchestrator | 
12:59:53.756 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.756938 | orchestrator | 12:59:53.756 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:53.756982 | orchestrator | 12:59:53.756 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.757025 | orchestrator | 12:59:53.756 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:53.757045 | orchestrator | 12:59:53.757 STDOUT terraform:  } 2025-07-12 12:59:53.757113 | orchestrator | 12:59:53.757 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-07-12 12:59:53.757182 | orchestrator | 12:59:53.757 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:53.757233 | orchestrator | 12:59:53.757 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:53.757269 | orchestrator | 12:59:53.757 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.757318 | orchestrator | 12:59:53.757 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:53.757352 | orchestrator | 12:59:53.757 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.757395 | orchestrator | 12:59:53.757 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:53.757421 | orchestrator | 12:59:53.757 STDOUT terraform:  } 2025-07-12 12:59:53.757476 | orchestrator | 12:59:53.757 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-07-12 12:59:53.757843 | orchestrator | 12:59:53.757 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:53.757895 | orchestrator | 12:59:53.757 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:53.757933 | orchestrator | 12:59:53.757 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.757967 | orchestrator | 12:59:53.757 STDOUT terraform:  + instance_id = 
(known after apply) 2025-07-12 12:59:53.758002 | orchestrator | 12:59:53.757 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.758052 | orchestrator | 12:59:53.758 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:53.758074 | orchestrator | 12:59:53.758 STDOUT terraform:  } 2025-07-12 12:59:53.758139 | orchestrator | 12:59:53.758 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-07-12 12:59:53.758200 | orchestrator | 12:59:53.758 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:53.758239 | orchestrator | 12:59:53.758 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:53.758274 | orchestrator | 12:59:53.758 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.758308 | orchestrator | 12:59:53.758 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:53.758343 | orchestrator | 12:59:53.758 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.758377 | orchestrator | 12:59:53.758 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:53.758397 | orchestrator | 12:59:53.758 STDOUT terraform:  } 2025-07-12 12:59:53.758451 | orchestrator | 12:59:53.758 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-07-12 12:59:53.758516 | orchestrator | 12:59:53.758 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:53.758550 | orchestrator | 12:59:53.758 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:53.758585 | orchestrator | 12:59:53.758 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.758620 | orchestrator | 12:59:53.758 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:53.758673 | orchestrator | 12:59:53.758 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.758711 
| orchestrator | 12:59:53.758 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:53.758731 | orchestrator | 12:59:53.758 STDOUT terraform:  } 2025-07-12 12:59:53.758788 | orchestrator | 12:59:53.758 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-07-12 12:59:53.758854 | orchestrator | 12:59:53.758 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:53.758892 | orchestrator | 12:59:53.758 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:53.758927 | orchestrator | 12:59:53.758 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.758960 | orchestrator | 12:59:53.758 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:53.758994 | orchestrator | 12:59:53.758 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.759028 | orchestrator | 12:59:53.759 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:53.759049 | orchestrator | 12:59:53.759 STDOUT terraform:  } 2025-07-12 12:59:53.759103 | orchestrator | 12:59:53.759 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-07-12 12:59:53.759156 | orchestrator | 12:59:53.759 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:53.759191 | orchestrator | 12:59:53.759 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:53.759224 | orchestrator | 12:59:53.759 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.759258 | orchestrator | 12:59:53.759 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:53.759292 | orchestrator | 12:59:53.759 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.759325 | orchestrator | 12:59:53.759 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:53.759345 | orchestrator | 12:59:53.759 STDOUT 
terraform:  } 2025-07-12 12:59:53.759399 | orchestrator | 12:59:53.759 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-07-12 12:59:53.759452 | orchestrator | 12:59:53.759 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:53.759491 | orchestrator | 12:59:53.759 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:53.759525 | orchestrator | 12:59:53.759 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.759558 | orchestrator | 12:59:53.759 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:53.759591 | orchestrator | 12:59:53.759 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.759664 | orchestrator | 12:59:53.759 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:53.759688 | orchestrator | 12:59:53.759 STDOUT terraform:  } 2025-07-12 12:59:53.759747 | orchestrator | 12:59:53.759 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-07-12 12:59:53.759802 | orchestrator | 12:59:53.759 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-12 12:59:53.759863 | orchestrator | 12:59:53.759 STDOUT terraform:  + device = (known after apply) 2025-07-12 12:59:53.759901 | orchestrator | 12:59:53.759 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.759938 | orchestrator | 12:59:53.759 STDOUT terraform:  + instance_id = (known after apply) 2025-07-12 12:59:53.760349 | orchestrator | 12:59:53.760 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.760425 | orchestrator | 12:59:53.760 STDOUT terraform:  + volume_id = (known after apply) 2025-07-12 12:59:53.760449 | orchestrator | 12:59:53.760 STDOUT terraform:  } 2025-07-12 12:59:53.760517 | orchestrator | 12:59:53.760 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-07-12 12:59:53.760581 | orchestrator | 12:59:53.760 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-07-12 12:59:53.760619 | orchestrator | 12:59:53.760 STDOUT terraform:  + fixed_ip = (known after apply) 2025-07-12 12:59:53.760812 | orchestrator | 12:59:53.760 STDOUT terraform:  + floating_ip = (known after apply) 2025-07-12 12:59:53.760856 | orchestrator | 12:59:53.760 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.761124 | orchestrator | 12:59:53.760 STDOUT terraform:  + port_id = (known after apply) 2025-07-12 12:59:53.761165 | orchestrator | 12:59:53.761 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.762847 | orchestrator | 12:59:53.762 STDOUT terraform:  } 2025-07-12 12:59:53.762925 | orchestrator | 12:59:53.762 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-07-12 12:59:53.762995 | orchestrator | 12:59:53.762 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-07-12 12:59:53.763030 | orchestrator | 12:59:53.763 STDOUT terraform:  + address = (known after apply) 2025-07-12 12:59:53.763063 | orchestrator | 12:59:53.763 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:53.763098 | orchestrator | 12:59:53.763 STDOUT terraform:  + dns_domain = (known after apply) 2025-07-12 12:59:53.763130 | orchestrator | 12:59:53.763 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 12:59:53.763161 | orchestrator | 12:59:53.763 STDOUT terraform:  + fixed_ip = (known after apply) 2025-07-12 12:59:53.763192 | orchestrator | 12:59:53.763 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.763219 | orchestrator | 12:59:53.763 STDOUT terraform:  + pool = "public" 2025-07-12 12:59:53.763273 | orchestrator | 12:59:53.763 STDOUT terraform:  + 
port_id = (known after apply) 2025-07-12 12:59:53.763306 | orchestrator | 12:59:53.763 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.763338 | orchestrator | 12:59:53.763 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 12:59:53.763377 | orchestrator | 12:59:53.763 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:53.763398 | orchestrator | 12:59:53.763 STDOUT terraform:  } 2025-07-12 12:59:53.763451 | orchestrator | 12:59:53.763 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-07-12 12:59:53.763500 | orchestrator | 12:59:53.763 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-07-12 12:59:53.763543 | orchestrator | 12:59:53.763 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 12:59:53.763587 | orchestrator | 12:59:53.763 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:53.763618 | orchestrator | 12:59:53.763 STDOUT terraform:  + availability_zone_hints = [ 2025-07-12 12:59:53.763658 | orchestrator | 12:59:53.763 STDOUT terraform:  + "nova", 2025-07-12 12:59:53.763681 | orchestrator | 12:59:53.763 STDOUT terraform:  ] 2025-07-12 12:59:53.763724 | orchestrator | 12:59:53.763 STDOUT terraform:  + dns_domain = (known after apply) 2025-07-12 12:59:53.763767 | orchestrator | 12:59:53.763 STDOUT terraform:  + external = (known after apply) 2025-07-12 12:59:53.763810 | orchestrator | 12:59:53.763 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.763853 | orchestrator | 12:59:53.763 STDOUT terraform:  + mtu = (known after apply) 2025-07-12 12:59:53.763899 | orchestrator | 12:59:53.763 STDOUT terraform:  + name = "net-testbed-management" 2025-07-12 12:59:53.763940 | orchestrator | 12:59:53.763 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 12:59:53.763985 | orchestrator | 12:59:53.763 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 
12:59:53.764030 | orchestrator | 12:59:53.763 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.764076 | orchestrator | 12:59:53.764 STDOUT terraform:  + shared = (known after apply) 2025-07-12 12:59:53.764121 | orchestrator | 12:59:53.764 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:53.764165 | orchestrator | 12:59:53.764 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-07-12 12:59:53.764204 | orchestrator | 12:59:53.764 STDOUT terraform:  + segments (known after apply) 2025-07-12 12:59:53.764236 | orchestrator | 12:59:53.764 STDOUT terraform:  } 2025-07-12 12:59:53.764290 | orchestrator | 12:59:53.764 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-07-12 12:59:53.764345 | orchestrator | 12:59:53.764 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-07-12 12:59:53.764388 | orchestrator | 12:59:53.764 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 12:59:53.764431 | orchestrator | 12:59:53.764 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 12:59:53.764473 | orchestrator | 12:59:53.764 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 12:59:53.764516 | orchestrator | 12:59:53.764 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:53.764557 | orchestrator | 12:59:53.764 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 12:59:53.764605 | orchestrator | 12:59:53.764 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 12:59:53.764692 | orchestrator | 12:59:53.764 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 12:59:53.764741 | orchestrator | 12:59:53.764 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 12:59:53.764783 | orchestrator | 12:59:53.764 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.764824 | orchestrator | 12:59:53.764 STDOUT terraform:  + 
mac_address = (known after apply) 2025-07-12 12:59:53.764869 | orchestrator | 12:59:53.764 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 12:59:53.764926 | orchestrator | 12:59:53.764 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 12:59:53.764968 | orchestrator | 12:59:53.764 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 12:59:53.765010 | orchestrator | 12:59:53.764 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.765051 | orchestrator | 12:59:53.765 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 12:59:53.765095 | orchestrator | 12:59:53.765 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:53.765121 | orchestrator | 12:59:53.765 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 12:59:53.765157 | orchestrator | 12:59:53.765 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 12:59:53.765178 | orchestrator | 12:59:53.765 STDOUT terraform:  } 2025-07-12 12:59:53.765206 | orchestrator | 12:59:53.765 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 12:59:53.765241 | orchestrator | 12:59:53.765 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 12:59:53.765261 | orchestrator | 12:59:53.765 STDOUT terraform:  } 2025-07-12 12:59:53.765291 | orchestrator | 12:59:53.765 STDOUT terraform:  + binding (known after apply) 2025-07-12 12:59:53.765312 | orchestrator | 12:59:53.765 STDOUT terraform:  + fixed_ip { 2025-07-12 12:59:53.765343 | orchestrator | 12:59:53.765 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-07-12 12:59:53.765380 | orchestrator | 12:59:53.765 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 12:59:53.765402 | orchestrator | 12:59:53.765 STDOUT terraform:  } 2025-07-12 12:59:53.765421 | orchestrator | 12:59:53.765 STDOUT terraform:  } 2025-07-12 12:59:53.765473 | orchestrator | 12:59:53.765 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-07-12 12:59:53.765523 | orchestrator | 12:59:53.765 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 12:59:53.765566 | orchestrator | 12:59:53.765 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 12:59:53.765607 | orchestrator | 12:59:53.765 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 12:59:53.765662 | orchestrator | 12:59:53.765 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 12:59:53.765704 | orchestrator | 12:59:53.765 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:53.765746 | orchestrator | 12:59:53.765 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 12:59:53.765794 | orchestrator | 12:59:53.765 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 12:59:53.765838 | orchestrator | 12:59:53.765 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 12:59:53.765881 | orchestrator | 12:59:53.765 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 12:59:53.767955 | orchestrator | 12:59:53.767 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.768001 | orchestrator | 12:59:53.767 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 12:59:53.768007 | orchestrator | 12:59:53.767 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 12:59:53.768143 | orchestrator | 12:59:53.768 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 12:59:53.768222 | orchestrator | 12:59:53.768 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 12:59:53.768262 | orchestrator | 12:59:53.768 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.768306 | orchestrator | 12:59:53.768 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 12:59:53.768356 | orchestrator | 12:59:53.768 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:53.768382 | 
orchestrator | 12:59:53.768 STDOUT terraform:
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }
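For readability, the router and router-interface plan entries above would typically originate from Terraform configuration along these lines. This is a sketch only: the resource names match the plan output, but the subnet reference (`openstack_networking_subnet_v2.subnet_management`) is an assumed name, not taken from the actual testbed sources.

```hcl
# Sketch of the configuration behind the plan entries above (assumptions noted).
resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  # Assumed subnet resource name; the plan only shows (known after apply).
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
```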
12:59:53.775 STDOUT terraform:
  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after
apply) 2025-07-12 12:59:53.781528 | orchestrator | 12:59:53.781 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:53.781535 | orchestrator | 12:59:53.781 STDOUT terraform:  } 2025-07-12 12:59:53.781582 | orchestrator | 12:59:53.781 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-07-12 12:59:53.781625 | orchestrator | 12:59:53.781 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-07-12 12:59:53.781671 | orchestrator | 12:59:53.781 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 12:59:53.781701 | orchestrator | 12:59:53.781 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-07-12 12:59:53.782474 | orchestrator | 12:59:53.781 STDOUT terraform:  + dns_nameservers = [ 2025-07-12 12:59:53.782484 | orchestrator | 12:59:53.781 STDOUT terraform:  + "8.8.8.8", 2025-07-12 12:59:53.782495 | orchestrator | 12:59:53.781 STDOUT terraform:  + "9.9.9.9", 2025-07-12 12:59:53.782500 | orchestrator | 12:59:53.781 STDOUT terraform:  ] 2025-07-12 12:59:53.782504 | orchestrator | 12:59:53.781 STDOUT terraform:  + enable_dhcp = true 2025-07-12 12:59:53.782509 | orchestrator | 12:59:53.781 STDOUT terraform:  + gateway_ip = (known after apply) 2025-07-12 12:59:53.782514 | orchestrator | 12:59:53.781 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.782521 | orchestrator | 12:59:53.781 STDOUT terraform:  + ip_version = 4 2025-07-12 12:59:53.782526 | orchestrator | 12:59:53.781 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-07-12 12:59:53.782531 | orchestrator | 12:59:53.781 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-07-12 12:59:53.782536 | orchestrator | 12:59:53.781 STDOUT terraform:  + name = "subnet-testbed-management" 2025-07-12 12:59:53.782542 | orchestrator | 12:59:53.781 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 12:59:53.782549 | orchestrator | 12:59:53.781 STDOUT terraform:  + no_gateway = 
false 2025-07-12 12:59:53.782556 | orchestrator | 12:59:53.781 STDOUT terraform:  + region = (known after apply) 2025-07-12 12:59:53.782561 | orchestrator | 12:59:53.781 STDOUT terraform:  + service_types = (known after apply) 2025-07-12 12:59:53.790169 | orchestrator | 12:59:53.782 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 12:59:53.790197 | orchestrator | 12:59:53.790 STDOUT terraform:  + allocation_pool { 2025-07-12 12:59:53.790202 | orchestrator | 12:59:53.790 STDOUT terraform:  + end = "192.168.31.250" 2025-07-12 12:59:53.790209 | orchestrator | 12:59:53.790 STDOUT terraform:  + start = "192.168.31.200" 2025-07-12 12:59:53.790227 | orchestrator | 12:59:53.790 STDOUT terraform:  } 2025-07-12 12:59:53.790247 | orchestrator | 12:59:53.790 STDOUT terraform:  } 2025-07-12 12:59:53.790272 | orchestrator | 12:59:53.790 STDOUT terraform:  # terraform_data.image will be created 2025-07-12 12:59:53.790294 | orchestrator | 12:59:53.790 STDOUT terraform:  + resource "terraform_data" "image" { 2025-07-12 12:59:53.790320 | orchestrator | 12:59:53.790 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.790344 | orchestrator | 12:59:53.790 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-12 12:59:53.790370 | orchestrator | 12:59:53.790 STDOUT terraform:  + output = (known after apply) 2025-07-12 12:59:53.790378 | orchestrator | 12:59:53.790 STDOUT terraform:  } 2025-07-12 12:59:53.790420 | orchestrator | 12:59:53.790 STDOUT terraform:  # terraform_data.image_node will be created 2025-07-12 12:59:53.790446 | orchestrator | 12:59:53.790 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-07-12 12:59:53.790470 | orchestrator | 12:59:53.790 STDOUT terraform:  + id = (known after apply) 2025-07-12 12:59:53.790492 | orchestrator | 12:59:53.790 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-12 12:59:53.790516 | orchestrator | 12:59:53.790 STDOUT terraform:  + output = (known after apply) 2025-07-12 12:59:53.790523 | 
orchestrator | 12:59:53.790 STDOUT terraform:  } 2025-07-12 12:59:53.790557 | orchestrator | 12:59:53.790 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-07-12 12:59:53.790571 | orchestrator | 12:59:53.790 STDOUT terraform: Changes to Outputs: 2025-07-12 12:59:53.790593 | orchestrator | 12:59:53.790 STDOUT terraform:  + manager_address = (sensitive value) 2025-07-12 12:59:53.790618 | orchestrator | 12:59:53.790 STDOUT terraform:  + private_key = (sensitive value) 2025-07-12 12:59:53.957885 | orchestrator | 12:59:53.956 STDOUT terraform: terraform_data.image_node: Creating... 2025-07-12 12:59:53.958261 | orchestrator | 12:59:53.958 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=4b40a2cb-8b67-afd9-f17d-62b8134c23c2] 2025-07-12 12:59:53.958289 | orchestrator | 12:59:53.958 STDOUT terraform: terraform_data.image: Creating... 2025-07-12 12:59:53.958295 | orchestrator | 12:59:53.958 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=cff21393-1768-f266-cda1-4ae284112d73] 2025-07-12 12:59:54.003006 | orchestrator | 12:59:54.001 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-07-12 12:59:54.006174 | orchestrator | 12:59:54.006 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-07-12 12:59:54.010621 | orchestrator | 12:59:54.010 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-07-12 12:59:54.018458 | orchestrator | 12:59:54.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-07-12 12:59:54.018515 | orchestrator | 12:59:54.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-07-12 12:59:54.018535 | orchestrator | 12:59:54.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-07-12 12:59:54.018765 | orchestrator | 12:59:54.018 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
2025-07-12 12:59:54.019716 | orchestrator | 12:59:54.019 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-07-12 12:59:54.020899 | orchestrator | 12:59:54.020 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-07-12 12:59:54.031767 | orchestrator | 12:59:54.031 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-07-12 12:59:54.540529 | orchestrator | 12:59:54.539 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-07-12 12:59:54.543140 | orchestrator | 12:59:54.542 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-07-12 12:59:55.052213 | orchestrator | 12:59:55.051 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=6198630f-fde7-430f-8c8e-eb7322cf3116]
2025-07-12 12:59:55.054196 | orchestrator | 12:59:55.054 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-07-12 12:59:55.134205 | orchestrator | 12:59:55.134 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-07-12 12:59:55.137490 | orchestrator | 12:59:55.137 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-07-12 12:59:55.195459 | orchestrator | 12:59:55.195 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-07-12 12:59:55.211976 | orchestrator | 12:59:55.211 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-07-12 12:59:55.219203 | orchestrator | 12:59:55.218 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=3eb4852f7f7d1d657ea220b5e0c68e3ad582d1c4]
2025-07-12 12:59:55.237001 | orchestrator | 12:59:55.236 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-07-12 12:59:55.245070 | orchestrator | 12:59:55.244 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=7a6773d9a949726829d76fbae7d5997b91c72dc0]
2025-07-12 12:59:55.249869 | orchestrator | 12:59:55.249 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-07-12 12:59:56.074497 | orchestrator | 12:59:56.074 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=58a54a7b-d756-4a1f-9d6a-077e3fea9127]
2025-07-12 12:59:56.083940 | orchestrator | 12:59:56.083 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-07-12 12:59:57.671603 | orchestrator | 12:59:57.670 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=910ce96f-e512-4ca8-91f5-259aab453767]
2025-07-12 12:59:57.671756 | orchestrator | 12:59:57.671 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=584411ea-1998-4909-85e4-828e969f2c29]
2025-07-12 12:59:57.680043 | orchestrator | 12:59:57.679 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-07-12 12:59:57.681246 | orchestrator | 12:59:57.680 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-07-12 12:59:57.681308 | orchestrator | 12:59:57.680 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=f0941989-f7a4-4554-ad13-0c2066939c98]
2025-07-12 12:59:57.687908 | orchestrator | 12:59:57.687 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-07-12 12:59:57.697610 | orchestrator | 12:59:57.697 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=ce974423-4fe6-4a7d-9a96-297586e8ac2f]
2025-07-12 12:59:57.702105 | orchestrator | 12:59:57.701 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-07-12 12:59:57.720480 | orchestrator | 12:59:57.720 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=657fd216-2be4-4730-9631-748e74f421ac]
2025-07-12 12:59:57.724912 | orchestrator | 12:59:57.724 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-07-12 12:59:57.733208 | orchestrator | 12:59:57.733 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=ae608c05-0dbb-4002-aca8-8a9a246fd830]
2025-07-12 12:59:57.737983 | orchestrator | 12:59:57.737 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-07-12 12:59:57.738213 | orchestrator | 12:59:57.738 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1]
2025-07-12 12:59:57.744425 | orchestrator | 12:59:57.744 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-07-12 12:59:57.751888 | orchestrator | 12:59:57.751 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=6157a0e8-ea5c-4f54-9d28-af3024f948aa]
2025-07-12 12:59:57.802671 | orchestrator | 12:59:57.802 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=73295db5-c3fe-42a7-9e6b-efb6b935a094]
2025-07-12 12:59:59.410495 | orchestrator | 12:59:59.410 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=2b2e6119-6454-48e3-8088-8ddf4e9c9719]
2025-07-12 13:00:00.677218 | orchestrator | 13:00:00.676 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=d4b3c65b-2302-4e8c-83bb-f2b74454dece]
2025-07-12 13:00:00.691944 | orchestrator | 13:00:00.686 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-07-12 13:00:00.692042 | orchestrator | 13:00:00.687 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-07-12 13:00:00.692053 | orchestrator | 13:00:00.687 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-07-12 13:00:00.870408 | orchestrator | 13:00:00.869 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=0651a496-5e2f-419d-b4fb-46e3c180752c]
2025-07-12 13:00:00.877959 | orchestrator | 13:00:00.877 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-07-12 13:00:00.889279 | orchestrator | 13:00:00.889 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-07-12 13:00:00.934210 | orchestrator | 13:00:00.933 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=d25b6911-c5ac-44d0-b7d0-972d00f8dd52]
2025-07-12 13:00:00.950280 | orchestrator | 13:00:00.949 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-07-12 13:00:01.020969 | orchestrator | 13:00:01.020 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=6e31f472-fef4-47d2-bbbe-287d607af371]
2025-07-12 13:00:01.034494 | orchestrator | 13:00:01.034 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-07-12 13:00:01.044726 | orchestrator | 13:00:01.044 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=2f7c4103-b7a1-40b5-b240-8feb842c5041]
2025-07-12 13:00:01.058984 | orchestrator | 13:00:01.058 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-07-12 13:00:01.078889 | orchestrator | 13:00:01.078 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=ce0b842b-4d26-4f39-a7a5-95396abdad92]
2025-07-12 13:00:01.091317 | orchestrator | 13:00:01.091 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-07-12 13:00:01.099551 | orchestrator | 13:00:01.099 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=c96a506e-4f4f-4467-9080-6e4031891f49]
2025-07-12 13:00:01.112246 | orchestrator | 13:00:01.112 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-07-12 13:00:01.113002 | orchestrator | 13:00:01.112 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=e9fe12d2-947e-4f68-8277-3ed645ecdab1]
2025-07-12 13:00:01.118249 | orchestrator | 13:00:01.118 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-07-12 13:00:01.134169 | orchestrator | 13:00:01.133 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=a40c6dc4-39fa-427b-93ef-c33f20a62f22]
2025-07-12 13:00:01.137129 | orchestrator | 13:00:01.137 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-07-12 13:00:01.139439 | orchestrator | 13:00:01.139 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=aa156cef-0214-4f9c-bceb-63dc1a9b9f72]
2025-07-12 13:00:01.142800 | orchestrator | 13:00:01.142 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-07-12 13:00:01.421464 | orchestrator | 13:00:01.421 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=79a21fcf-62e6-4b6e-b13a-7426c7fc949e]
2025-07-12 13:00:01.428003 | orchestrator | 13:00:01.427 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-07-12 13:00:01.635228 | orchestrator | 13:00:01.634 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=9cc57420-6e93-4348-9c5b-df0407b9b3e6]
2025-07-12 13:00:01.652706 | orchestrator | 13:00:01.652 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-07-12 13:00:01.746894 | orchestrator | 13:00:01.746 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=a8f39262-6412-4074-909a-27644e9771d3]
2025-07-12 13:00:01.752468 | orchestrator | 13:00:01.752 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-07-12 13:00:01.772729 | orchestrator | 13:00:01.772 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=05cad993-8896-4a81-a732-8d81d077c29a]
2025-07-12 13:00:01.778518 | orchestrator | 13:00:01.778 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-07-12 13:00:01.827022 | orchestrator | 13:00:01.826 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=26732a5a-b00f-40a4-b8aa-2a61b883f84c]
2025-07-12 13:00:01.833233 | orchestrator | 13:00:01.832 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=d77145a6-1cad-4cac-b258-621ab12d1fa5]
2025-07-12 13:00:01.835874 | orchestrator | 13:00:01.835 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-07-12 13:00:01.836511 | orchestrator | 13:00:01.836 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=7c09d0be-de8f-4656-ab31-5618ba8237d0]
2025-07-12 13:00:01.837920 | orchestrator | 13:00:01.837 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-07-12 13:00:01.842274 | orchestrator | 13:00:01.842 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=6376d7b2-a024-478a-a60e-a0bcdd4a4766]
2025-07-12 13:00:01.851197 | orchestrator | 13:00:01.850 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=f26be7db-5503-4a2e-a753-d53e2ca9e863]
2025-07-12 13:00:01.894985 | orchestrator | 13:00:01.894 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=d681085c-61af-42cd-a7b6-12e7a5e5f74c]
2025-07-12 13:00:01.919092 | orchestrator | 13:00:01.918 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=be03de95-7602-4b55-8420-21870abb153a]
2025-07-12 13:00:02.066811 | orchestrator | 13:00:02.066 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=4cd7750c-acc4-494f-9def-03a12a04d8ed]
2025-07-12 13:00:02.216732 | orchestrator | 13:00:02.216 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=9455462b-c121-41bb-b9a2-3309256bc6d7]
2025-07-12 13:00:02.279090 | orchestrator | 13:00:02.278 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=66096000-683b-400a-8bf4-8a424ae84d9c]
2025-07-12 13:00:02.353664 | orchestrator | 13:00:02.353 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=e4800b4f-ae4f-45cc-87d4-ef4804081ebf]
2025-07-12 13:00:02.991041 | orchestrator | 13:00:02.990 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=9160ab5b-baf4-4e88-8feb-26bde33aded1]
2025-07-12 13:00:03.018112 | orchestrator | 13:00:03.017 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-07-12 13:00:03.024727 | orchestrator | 13:00:03.024 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-07-12 13:00:03.030923 | orchestrator | 13:00:03.030 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-07-12 13:00:03.032363 | orchestrator | 13:00:03.032 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-07-12 13:00:03.035151 | orchestrator | 13:00:03.035 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-07-12 13:00:03.051807 | orchestrator | 13:00:03.050 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-07-12 13:00:03.051831 | orchestrator | 13:00:03.050 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-07-12 13:00:04.546170 | orchestrator | 13:00:04.545 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=9b166861-633d-46ab-ad8b-95b08fffb535]
2025-07-12 13:00:04.686349 | orchestrator | 13:00:04.560 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-07-12 13:00:04.686425 | orchestrator | 13:00:04.564 STDOUT terraform: local_file.inventory: Creating...
2025-07-12 13:00:04.686441 | orchestrator | 13:00:04.566 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-07-12 13:00:04.694130 | orchestrator | 13:00:04.693 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=c1201441522c5fb3ed00252f5b643214316ee4e9]
2025-07-12 13:00:04.694790 | orchestrator | 13:00:04.694 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=1393370d6faf6eebeb535b2454e18b81f47eecd2]
2025-07-12 13:00:05.436123 | orchestrator | 13:00:05.435 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=9b166861-633d-46ab-ad8b-95b08fffb535]
2025-07-12 13:00:13.025960 | orchestrator | 13:00:13.025 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-07-12 13:00:13.034264 | orchestrator | 13:00:13.033 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-07-12 13:00:13.034379 | orchestrator | 13:00:13.034 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-07-12 13:00:13.042217 | orchestrator | 13:00:13.041 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-07-12 13:00:13.050833 | orchestrator | 13:00:13.050 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-07-12 13:00:13.051098 | orchestrator | 13:00:13.050 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-07-12 13:00:23.027383 | orchestrator | 13:00:23.027 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-07-12 13:00:23.034486 | orchestrator | 13:00:23.034 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-07-12 13:00:23.034593 | orchestrator | 13:00:23.034 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-07-12 13:00:23.042611 | orchestrator | 13:00:23.042 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-07-12 13:00:23.051018 | orchestrator | 13:00:23.050 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-07-12 13:00:23.051824 | orchestrator | 13:00:23.051 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-07-12 13:00:23.484416 | orchestrator | 13:00:23.484 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=3d98874d-0ae4-4faa-8303-c39ea732da9c]
2025-07-12 13:00:23.639808 | orchestrator | 13:00:23.639 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=209df802-0c2a-4017-b5cf-e80795503386]
2025-07-12 13:00:33.028484 | orchestrator | 13:00:33.028 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-07-12 13:00:33.035708 | orchestrator | 13:00:33.035 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-07-12 13:00:33.042928 | orchestrator | 13:00:33.042 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-07-12 13:00:33.052300 | orchestrator | 13:00:33.052 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-07-12 13:00:33.769554 | orchestrator | 13:00:33.769 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=e71eab9e-a829-436f-a60d-f8ad5068d72b]
2025-07-12 13:00:33.841748 | orchestrator | 13:00:33.841 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=d7bbd079-427d-4e8e-9436-3de520475b09]
2025-07-12 13:00:33.854518 | orchestrator | 13:00:33.854 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=405c5e05-93f7-4cc3-94fa-bd24a77d293d]
2025-07-12 13:00:33.867548 | orchestrator | 13:00:33.867 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=ee35bde8-51c6-4544-a130-f2bbf30806dc]
2025-07-12 13:00:33.894591 | orchestrator | 13:00:33.894 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-07-12 13:00:33.900762 | orchestrator | 13:00:33.900 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4586187033924887570]
2025-07-12 13:00:33.905240 | orchestrator | 13:00:33.905 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-07-12 13:00:33.905844 | orchestrator | 13:00:33.905 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-07-12 13:00:33.907264 | orchestrator | 13:00:33.907 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-07-12 13:00:33.926285 | orchestrator | 13:00:33.926 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-07-12 13:00:33.926461 | orchestrator | 13:00:33.926 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-07-12 13:00:33.927172 | orchestrator | 13:00:33.927 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-07-12 13:00:33.928200 | orchestrator | 13:00:33.928 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-07-12 13:00:33.930655 | orchestrator | 13:00:33.930 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-07-12 13:00:33.933540 | orchestrator | 13:00:33.933 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-07-12 13:00:33.941081 | orchestrator | 13:00:33.940 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-07-12 13:00:37.454686 | orchestrator | 13:00:37.454 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=209df802-0c2a-4017-b5cf-e80795503386/657fd216-2be4-4730-9631-748e74f421ac]
2025-07-12 13:00:37.473624 | orchestrator | 13:00:37.473 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=3d98874d-0ae4-4faa-8303-c39ea732da9c/164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1]
2025-07-12 13:00:37.487193 | orchestrator | 13:00:37.486 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=405c5e05-93f7-4cc3-94fa-bd24a77d293d/584411ea-1998-4909-85e4-828e969f2c29]
2025-07-12 13:00:37.506113 | orchestrator | 13:00:37.505 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=405c5e05-93f7-4cc3-94fa-bd24a77d293d/ce974423-4fe6-4a7d-9a96-297586e8ac2f]
2025-07-12 13:00:37.508442 | orchestrator | 13:00:37.508 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=209df802-0c2a-4017-b5cf-e80795503386/910ce96f-e512-4ca8-91f5-259aab453767]
2025-07-12 13:00:37.541626 | orchestrator | 13:00:37.541 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=3d98874d-0ae4-4faa-8303-c39ea732da9c/6157a0e8-ea5c-4f54-9d28-af3024f948aa]
2025-07-12 13:00:43.608300 | orchestrator | 13:00:43.607 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=3d98874d-0ae4-4faa-8303-c39ea732da9c/f0941989-f7a4-4554-ad13-0c2066939c98]
2025-07-12 13:00:43.645031 | orchestrator | 13:00:43.644 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=209df802-0c2a-4017-b5cf-e80795503386/ae608c05-0dbb-4002-aca8-8a9a246fd830]
2025-07-12 13:00:43.652576 | orchestrator | 13:00:43.652 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=405c5e05-93f7-4cc3-94fa-bd24a77d293d/73295db5-c3fe-42a7-9e6b-efb6b935a094]
2025-07-12 13:00:43.929935 | orchestrator | 13:00:43.929 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-07-12 13:00:53.931459 | orchestrator | 13:00:53.931 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-07-12 13:00:54.398875 | orchestrator | 13:00:54.398 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=55aaf5c3-2b46-4054-9352-cf366912f67a]
2025-07-12 13:00:54.415542 | orchestrator | 13:00:54.415 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-07-12 13:00:54.415612 | orchestrator | 13:00:54.415 STDOUT terraform: Outputs: 2025-07-12 13:00:54.415634 | orchestrator | 13:00:54.415 STDOUT terraform: manager_address = 2025-07-12 13:00:54.415688 | orchestrator | 13:00:54.415 STDOUT terraform: private_key = 2025-07-12 13:00:54.727491 | orchestrator | ok: Runtime: 0:01:09.830943 2025-07-12 13:00:54.759504 | 2025-07-12 13:00:54.759635 | TASK [Create infrastructure (stable)] 2025-07-12 13:00:55.295319 | orchestrator | skipping: Conditional result was False 2025-07-12 13:00:55.320615 | 2025-07-12 13:00:55.321233 | TASK [Fetch manager address] 2025-07-12 13:00:55.770707 | orchestrator | ok 2025-07-12 13:00:55.780437 | 2025-07-12 13:00:55.780557 | TASK [Set manager_host address] 2025-07-12 13:00:55.860313 | orchestrator | ok 2025-07-12 13:00:55.869629 | 2025-07-12 13:00:55.869778 | LOOP [Update ansible collections] 2025-07-12 13:00:57.631532 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-12 13:00:57.631897 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 13:00:57.632043 | orchestrator | Starting galaxy collection install process 2025-07-12 13:00:57.632097 | orchestrator | Process install dependency map 2025-07-12 13:00:57.632136 | orchestrator | Starting collection install process 2025-07-12 13:00:57.632171 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-07-12 13:00:57.632211 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-07-12 13:00:57.632251 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-07-12 13:00:57.632330 | orchestrator | ok: Item: commons Runtime: 0:00:01.439396 2025-07-12 13:00:59.080066 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 
2025-07-12 13:00:59.080265 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 13:00:59.080318 | orchestrator | Starting galaxy collection install process 2025-07-12 13:00:59.080359 | orchestrator | Process install dependency map 2025-07-12 13:00:59.080397 | orchestrator | Starting collection install process 2025-07-12 13:00:59.080432 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-07-12 13:00:59.080468 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-07-12 13:00:59.080503 | orchestrator | osism.services:999.0.0 was installed successfully 2025-07-12 13:00:59.080558 | orchestrator | ok: Item: services Runtime: 0:00:01.170238 2025-07-12 13:00:59.106743 | 2025-07-12 13:00:59.106977 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-12 13:01:09.647428 | orchestrator | ok 2025-07-12 13:01:09.658711 | 2025-07-12 13:01:09.658882 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-12 13:02:09.707066 | orchestrator | ok 2025-07-12 13:02:09.718186 | 2025-07-12 13:02:09.718343 | TASK [Fetch manager ssh hostkey] 2025-07-12 13:02:11.294215 | orchestrator | Output suppressed because no_log was given 2025-07-12 13:02:11.310318 | 2025-07-12 13:02:11.310537 | TASK [Get ssh keypair from terraform environment] 2025-07-12 13:02:11.847421 | orchestrator | ok: Runtime: 0:00:00.010696 2025-07-12 13:02:11.861789 | 2025-07-12 13:02:11.861998 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-12 13:02:11.910886 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-07-12 13:02:11.922909 | 2025-07-12 13:02:11.923082 | TASK [Run manager part 0] 2025-07-12 13:02:13.647537 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 13:02:13.786174 | orchestrator | 2025-07-12 13:02:13.786240 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-07-12 13:02:13.786250 | orchestrator | 2025-07-12 13:02:13.786267 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-07-12 13:02:15.673245 | orchestrator | ok: [testbed-manager] 2025-07-12 13:02:15.673314 | orchestrator | 2025-07-12 13:02:15.673340 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-12 13:02:15.673350 | orchestrator | 2025-07-12 13:02:15.673360 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 13:02:17.647561 | orchestrator | ok: [testbed-manager] 2025-07-12 13:02:17.647700 | orchestrator | 2025-07-12 13:02:17.647718 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-12 13:02:18.373316 | orchestrator | ok: [testbed-manager] 2025-07-12 13:02:18.373408 | orchestrator | 2025-07-12 13:02:18.373419 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-12 13:02:18.438292 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:02:18.438332 | orchestrator | 2025-07-12 13:02:18.438343 | orchestrator | TASK [Update package cache] **************************************************** 2025-07-12 13:02:18.470091 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:02:18.470107 | orchestrator | 2025-07-12 13:02:18.470113 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-12 13:02:18.497609 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:02:18.497626 | 
orchestrator | 2025-07-12 13:02:18.497631 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-12 13:02:18.533782 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:02:18.533797 | orchestrator | 2025-07-12 13:02:18.533802 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-12 13:02:18.561145 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:02:18.561173 | orchestrator | 2025-07-12 13:02:18.561179 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-07-12 13:02:18.595323 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:02:18.595343 | orchestrator | 2025-07-12 13:02:18.595351 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-07-12 13:02:18.631266 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:02:18.631302 | orchestrator | 2025-07-12 13:02:18.631309 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-07-12 13:02:19.408776 | orchestrator | changed: [testbed-manager] 2025-07-12 13:02:19.408872 | orchestrator | 2025-07-12 13:02:19.408881 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-07-12 13:05:43.770136 | orchestrator | changed: [testbed-manager] 2025-07-12 13:05:43.770225 | orchestrator | 2025-07-12 13:05:43.770243 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-12 13:06:58.494855 | orchestrator | changed: [testbed-manager] 2025-07-12 13:06:58.496531 | orchestrator | 2025-07-12 13:06:58.496553 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-12 13:07:18.466855 | orchestrator | changed: [testbed-manager] 2025-07-12 13:07:18.468303 | orchestrator | 2025-07-12 13:07:18.468341 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-07-12 13:07:27.486535 | orchestrator | changed: [testbed-manager] 2025-07-12 13:07:27.486629 | orchestrator | 2025-07-12 13:07:27.486645 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-12 13:07:27.540806 | orchestrator | ok: [testbed-manager] 2025-07-12 13:07:27.540913 | orchestrator | 2025-07-12 13:07:27.540929 | orchestrator | TASK [Get current user] ******************************************************** 2025-07-12 13:07:28.334247 | orchestrator | ok: [testbed-manager] 2025-07-12 13:07:28.334336 | orchestrator | 2025-07-12 13:07:28.334356 | orchestrator | TASK [Create venv directory] *************************************************** 2025-07-12 13:07:29.082833 | orchestrator | changed: [testbed-manager] 2025-07-12 13:07:29.082926 | orchestrator | 2025-07-12 13:07:29.082944 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-07-12 13:07:35.609994 | orchestrator | changed: [testbed-manager] 2025-07-12 13:07:35.611385 | orchestrator | 2025-07-12 13:07:35.611427 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-07-12 13:07:41.479613 | orchestrator | changed: [testbed-manager] 2025-07-12 13:07:41.479697 | orchestrator | 2025-07-12 13:07:41.479715 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-07-12 13:07:44.267196 | orchestrator | changed: [testbed-manager] 2025-07-12 13:07:44.267282 | orchestrator | 2025-07-12 13:07:44.267298 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-07-12 13:07:46.108145 | orchestrator | changed: [testbed-manager] 2025-07-12 13:07:46.109112 | orchestrator | 2025-07-12 13:07:46.109178 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-07-12 
13:07:47.270389 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-12 13:07:47.270472 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-12 13:07:47.270489 | orchestrator | 2025-07-12 13:07:47.270503 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-07-12 13:07:47.305468 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-12 13:07:47.305522 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-12 13:07:47.305529 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-12 13:07:47.305534 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-07-12 13:07:53.356280 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-12 13:07:53.356332 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-12 13:07:53.356338 | orchestrator | 2025-07-12 13:07:53.356344 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-07-12 13:07:53.943836 | orchestrator | changed: [testbed-manager] 2025-07-12 13:07:53.943917 | orchestrator | 2025-07-12 13:07:53.943933 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-07-12 13:09:15.524634 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-07-12 13:09:15.524708 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-07-12 13:09:15.524726 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-07-12 13:09:15.524738 | orchestrator | 2025-07-12 13:09:15.524770 | orchestrator | TASK [Install local collections] *********************************************** 2025-07-12 13:09:17.990352 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-07-12 13:09:17.990428 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-07-12 13:09:17.990439 | orchestrator | 2025-07-12 13:09:17.990449 | orchestrator | PLAY [Create operator user] **************************************************** 2025-07-12 13:09:17.990458 | orchestrator | 2025-07-12 13:09:17.990466 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 13:09:19.476238 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:19.476451 | orchestrator | 2025-07-12 13:09:19.476473 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-12 13:09:19.522404 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:19.522460 | orchestrator | 2025-07-12 13:09:19.522468 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-12 13:09:19.597488 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:19.597576 | orchestrator | 2025-07-12 13:09:19.597592 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-12 13:09:20.409923 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:20.410008 | orchestrator | 2025-07-12 13:09:20.410126 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-12 13:09:21.160270 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:21.160357 | orchestrator | 2025-07-12 13:09:21.160374 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-12 13:09:22.542791 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-07-12 13:09:22.542900 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-07-12 13:09:22.542917 | orchestrator | 2025-07-12 13:09:22.542947 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-07-12 13:09:23.978870 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:23.978983 | orchestrator | 2025-07-12 13:09:23.979000 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-12 13:09:25.807463 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-07-12 13:09:25.807549 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-07-12 13:09:25.807563 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-07-12 13:09:25.807574 | orchestrator | 2025-07-12 13:09:25.807588 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-07-12 13:09:25.865542 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:09:25.865628 | orchestrator | 2025-07-12 13:09:25.865644 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-12 13:09:26.447255 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:26.447340 | orchestrator | 2025-07-12 13:09:26.447358 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-12 13:09:26.524651 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:09:26.524723 | orchestrator | 2025-07-12 13:09:26.524737 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-07-12 13:09:27.418743 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 13:09:27.418851 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:27.418879 | orchestrator | 2025-07-12 13:09:27.418900 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-12 13:09:27.462967 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:09:27.463026 | orchestrator | 2025-07-12 13:09:27.463039 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-12 13:09:27.497581 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:09:27.497637 | orchestrator | 2025-07-12 13:09:27.497647 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-12 13:09:27.536263 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:09:27.536346 | orchestrator | 2025-07-12 13:09:27.536361 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-12 13:09:27.589036 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:09:27.589097 | orchestrator | 2025-07-12 13:09:27.589106 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-12 13:09:28.318873 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:28.319075 | orchestrator | 2025-07-12 13:09:28.319099 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-12 13:09:28.319113 | orchestrator | 2025-07-12 13:09:28.319124 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 13:09:29.756573 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:29.756665 | orchestrator | 2025-07-12 13:09:29.756681 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-07-12 13:09:30.720945 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:30.721026 | orchestrator | 2025-07-12 13:09:30.721044 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:09:30.721059 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-07-12 13:09:30.721073 | orchestrator | 2025-07-12 13:09:31.238232 | orchestrator | ok: Runtime: 0:07:18.573801 2025-07-12 13:09:31.255927 | 2025-07-12 13:09:31.256144 | TASK [Point 
out that the log in on the manager is now possible] 2025-07-12 13:09:31.298980 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-07-12 13:09:31.310144 | 2025-07-12 13:09:31.310290 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-12 13:09:31.354371 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-07-12 13:09:31.363874 | 2025-07-12 13:09:31.364098 | TASK [Run manager part 1 + 2] 2025-07-12 13:09:32.319507 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 13:09:32.372340 | orchestrator | 2025-07-12 13:09:32.372428 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-07-12 13:09:32.372446 | orchestrator | 2025-07-12 13:09:32.372476 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 13:09:35.334085 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:35.334194 | orchestrator | 2025-07-12 13:09:35.334218 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-12 13:09:35.367905 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:09:35.367947 | orchestrator | 2025-07-12 13:09:35.367955 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-12 13:09:35.401531 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:35.401576 | orchestrator | 2025-07-12 13:09:35.401584 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-12 13:09:35.441572 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:35.441624 | orchestrator | 2025-07-12 13:09:35.441635 | orchestrator | TASK [osism.commons.repository : Set repository_default fact
to default value] *** 2025-07-12 13:09:35.516641 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:35.516693 | orchestrator | 2025-07-12 13:09:35.516703 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-12 13:09:35.576517 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:35.576565 | orchestrator | 2025-07-12 13:09:35.576574 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-12 13:09:35.619271 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-07-12 13:09:35.619400 | orchestrator | 2025-07-12 13:09:35.619409 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-12 13:09:36.334030 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:36.334087 | orchestrator | 2025-07-12 13:09:36.334098 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-12 13:09:36.389092 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:09:36.389141 | orchestrator | 2025-07-12 13:09:36.389148 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-12 13:09:37.745456 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:37.745516 | orchestrator | 2025-07-12 13:09:37.745527 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-12 13:09:38.331126 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:38.331297 | orchestrator | 2025-07-12 13:09:38.331310 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-12 13:09:39.506579 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:39.506627 | orchestrator | 2025-07-12 13:09:39.506638 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-07-12 13:09:52.886215 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:52.886290 | orchestrator | 2025-07-12 13:09:52.886306 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-12 13:09:53.564200 | orchestrator | ok: [testbed-manager] 2025-07-12 13:09:53.564285 | orchestrator | 2025-07-12 13:09:53.564306 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-12 13:09:53.609886 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:09:53.609962 | orchestrator | 2025-07-12 13:09:53.609980 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-07-12 13:09:54.620787 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:54.620842 | orchestrator | 2025-07-12 13:09:54.620885 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-07-12 13:09:55.602515 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:55.602599 | orchestrator | 2025-07-12 13:09:55.602616 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-07-12 13:09:56.190125 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:56.190208 | orchestrator | 2025-07-12 13:09:56.190225 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-07-12 13:09:56.229521 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-12 13:09:56.229621 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-12 13:09:56.229636 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-12 13:09:56.229649 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-07-12 13:09:58.454323 | orchestrator | changed: [testbed-manager] 2025-07-12 13:09:58.454422 | orchestrator | 2025-07-12 13:09:58.454440 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-07-12 13:10:07.481697 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-07-12 13:10:07.481797 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-07-12 13:10:07.481815 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-07-12 13:10:07.481827 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-07-12 13:10:07.481849 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-07-12 13:10:07.481904 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-07-12 13:10:07.481916 | orchestrator | 2025-07-12 13:10:07.481928 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-07-12 13:10:08.534314 | orchestrator | changed: [testbed-manager] 2025-07-12 13:10:08.534387 | orchestrator | 2025-07-12 13:10:08.534404 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-07-12 13:10:08.573288 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:10:08.573321 | orchestrator | 2025-07-12 13:10:08.573329 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-07-12 13:10:11.683141 | orchestrator | changed: [testbed-manager] 2025-07-12 13:10:11.683177 | orchestrator | 2025-07-12 13:10:11.683186 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-07-12 13:10:11.724284 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:10:11.724349 | orchestrator | 2025-07-12 13:10:11.724364 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-07-12 13:11:47.703523 | orchestrator | changed: [testbed-manager] 2025-07-12 
13:11:47.703616 | orchestrator | 2025-07-12 13:11:47.703636 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-12 13:11:48.846530 | orchestrator | ok: [testbed-manager] 2025-07-12 13:11:48.846607 | orchestrator | 2025-07-12 13:11:48.846624 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:11:48.846638 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-07-12 13:11:48.846650 | orchestrator | 2025-07-12 13:11:49.000394 | orchestrator | ok: Runtime: 0:02:17.208963 2025-07-12 13:11:49.012160 | 2025-07-12 13:11:49.012292 | TASK [Reboot manager] 2025-07-12 13:11:50.548370 | orchestrator | ok: Runtime: 0:00:00.997538 2025-07-12 13:11:50.564703 | 2025-07-12 13:11:50.564888 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-12 13:12:05.370207 | orchestrator | ok 2025-07-12 13:12:05.380768 | 2025-07-12 13:12:05.380878 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-12 13:13:05.416193 | orchestrator | ok 2025-07-12 13:13:05.427258 | 2025-07-12 13:13:05.427399 | TASK [Deploy manager + bootstrap nodes] 2025-07-12 13:13:07.994081 | orchestrator | 2025-07-12 13:13:07.994278 | orchestrator | # DEPLOY MANAGER 2025-07-12 13:13:07.994302 | orchestrator | 2025-07-12 13:13:07.994317 | orchestrator | + set -e 2025-07-12 13:13:07.994330 | orchestrator | + echo 2025-07-12 13:13:07.994344 | orchestrator | + echo '# DEPLOY MANAGER' 2025-07-12 13:13:07.994361 | orchestrator | + echo 2025-07-12 13:13:07.994410 | orchestrator | + cat /opt/manager-vars.sh 2025-07-12 13:13:07.996632 | orchestrator | export NUMBER_OF_NODES=6 2025-07-12 13:13:07.996656 | orchestrator | 2025-07-12 13:13:07.996668 | orchestrator | export CEPH_VERSION=reef 2025-07-12 13:13:07.996682 | orchestrator | export CONFIGURATION_VERSION=main 2025-07-12 13:13:07.996694 | orchestrator 
| export MANAGER_VERSION=latest 2025-07-12 13:13:07.996716 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-07-12 13:13:07.996727 | orchestrator | 2025-07-12 13:13:07.996745 | orchestrator | export ARA=false 2025-07-12 13:13:07.996757 | orchestrator | export DEPLOY_MODE=manager 2025-07-12 13:13:07.996775 | orchestrator | export TEMPEST=false 2025-07-12 13:13:07.996787 | orchestrator | export IS_ZUUL=true 2025-07-12 13:13:07.996798 | orchestrator | 2025-07-12 13:13:07.996816 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2025-07-12 13:13:07.996828 | orchestrator | export EXTERNAL_API=false 2025-07-12 13:13:07.996839 | orchestrator | 2025-07-12 13:13:07.996850 | orchestrator | export IMAGE_USER=ubuntu 2025-07-12 13:13:07.996864 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-07-12 13:13:07.996875 | orchestrator | 2025-07-12 13:13:07.996886 | orchestrator | export CEPH_STACK=ceph-ansible 2025-07-12 13:13:07.997067 | orchestrator | 2025-07-12 13:13:07.997084 | orchestrator | + echo 2025-07-12 13:13:07.997096 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 13:13:07.997983 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 13:13:07.998001 | orchestrator | ++ INTERACTIVE=false 2025-07-12 13:13:07.998013 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 13:13:07.998066 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 13:13:07.998238 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 13:13:07.998253 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 13:13:07.998264 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 13:13:07.998309 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 13:13:07.998322 | orchestrator | ++ CEPH_VERSION=reef 2025-07-12 13:13:07.998333 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 13:13:07.998344 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 13:13:07.998355 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-12 13:13:07.998366 | 
orchestrator | ++ MANAGER_VERSION=latest
2025-07-12 13:13:07.998377 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 13:13:07.998396 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 13:13:07.998412 | orchestrator | ++ export ARA=false
2025-07-12 13:13:07.998423 | orchestrator | ++ ARA=false
2025-07-12 13:13:07.998434 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 13:13:07.998448 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 13:13:07.998459 | orchestrator | ++ export TEMPEST=false
2025-07-12 13:13:07.998470 | orchestrator | ++ TEMPEST=false
2025-07-12 13:13:07.998481 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 13:13:07.998492 | orchestrator | ++ IS_ZUUL=true
2025-07-12 13:13:07.998503 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5
2025-07-12 13:13:07.998514 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5
2025-07-12 13:13:07.998525 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 13:13:07.998539 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 13:13:07.998550 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 13:13:07.998561 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 13:13:07.998575 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 13:13:07.998586 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 13:13:07.998706 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 13:13:07.998720 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 13:13:07.998852 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-12 13:13:08.066224 | orchestrator | + docker version
2025-07-12 13:13:08.359998 | orchestrator | Client: Docker Engine - Community
2025-07-12 13:13:08.360121 | orchestrator | Version: 27.5.1
2025-07-12 13:13:08.360141 | orchestrator | API version: 1.47
2025-07-12 13:13:08.360153 | orchestrator | Go version: go1.22.11
2025-07-12 13:13:08.360165 | orchestrator | Git commit: 9f9e405
2025-07-12 13:13:08.360176 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-12 13:13:08.360188 | orchestrator | OS/Arch: linux/amd64
2025-07-12 13:13:08.360199 | orchestrator | Context: default
2025-07-12 13:13:08.360210 | orchestrator |
2025-07-12 13:13:08.360222 | orchestrator | Server: Docker Engine - Community
2025-07-12 13:13:08.360233 | orchestrator | Engine:
2025-07-12 13:13:08.360245 | orchestrator | Version: 27.5.1
2025-07-12 13:13:08.360256 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-12 13:13:08.360298 | orchestrator | Go version: go1.22.11
2025-07-12 13:13:08.360309 | orchestrator | Git commit: 4c9b3b0
2025-07-12 13:13:08.360320 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-12 13:13:08.360331 | orchestrator | OS/Arch: linux/amd64
2025-07-12 13:13:08.360342 | orchestrator | Experimental: false
2025-07-12 13:13:08.360353 | orchestrator | containerd:
2025-07-12 13:13:08.360364 | orchestrator | Version: 1.7.27
2025-07-12 13:13:08.360375 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-12 13:13:08.360386 | orchestrator | runc:
2025-07-12 13:13:08.360397 | orchestrator | Version: 1.2.5
2025-07-12 13:13:08.360408 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-12 13:13:08.360419 | orchestrator | docker-init:
2025-07-12 13:13:08.360429 | orchestrator | Version: 0.19.0
2025-07-12 13:13:08.360441 | orchestrator | GitCommit: de40ad0
2025-07-12 13:13:08.362163 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-12 13:13:08.370827 | orchestrator | + set -e
2025-07-12 13:13:08.370850 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 13:13:08.370864 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 13:13:08.370882 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 13:13:08.370915 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 13:13:08.370927 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 13:13:08.370938 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12
13:13:08.370949 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 13:13:08.370960 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-12 13:13:08.370999 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-12 13:13:08.371010 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-12 13:13:08.371021 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-12 13:13:08.371033 | orchestrator | ++ export ARA=false 2025-07-12 13:13:08.371044 | orchestrator | ++ ARA=false 2025-07-12 13:13:08.371054 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-12 13:13:08.371065 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-12 13:13:08.371076 | orchestrator | ++ export TEMPEST=false 2025-07-12 13:13:08.371087 | orchestrator | ++ TEMPEST=false 2025-07-12 13:13:08.371097 | orchestrator | ++ export IS_ZUUL=true 2025-07-12 13:13:08.371108 | orchestrator | ++ IS_ZUUL=true 2025-07-12 13:13:08.371119 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2025-07-12 13:13:08.371130 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2025-07-12 13:13:08.371141 | orchestrator | ++ export EXTERNAL_API=false 2025-07-12 13:13:08.371151 | orchestrator | ++ EXTERNAL_API=false 2025-07-12 13:13:08.371162 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-12 13:13:08.371177 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-12 13:13:08.371193 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-12 13:13:08.371204 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-12 13:13:08.371223 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-12 13:13:08.371234 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-12 13:13:08.371245 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 13:13:08.371256 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 13:13:08.371266 | orchestrator | ++ INTERACTIVE=false 2025-07-12 13:13:08.371277 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 13:13:08.371292 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2025-07-12 13:13:08.371308 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-12 13:13:08.371319 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-12 13:13:08.371330 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-07-12 13:13:08.378652 | orchestrator | + set -e
2025-07-12 13:13:08.379330 | orchestrator | + VERSION=reef
2025-07-12 13:13:08.379643 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-07-12 13:13:08.385821 | orchestrator | + [[ -n ceph_version: reef ]]
2025-07-12 13:13:08.385865 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-07-12 13:13:08.391914 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-07-12 13:13:08.398344 | orchestrator | + set -e
2025-07-12 13:13:08.398368 | orchestrator | + VERSION=2024.2
2025-07-12 13:13:08.399464 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-07-12 13:13:08.403448 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-07-12 13:13:08.403476 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-07-12 13:13:08.410108 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-07-12 13:13:08.411248 | orchestrator | ++ semver latest 7.0.0
2025-07-12 13:13:08.478626 | orchestrator | + [[ -1 -ge 0 ]]
2025-07-12 13:13:08.478709 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-12 13:13:08.478722 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-07-12 13:13:08.478733 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-07-12 13:13:08.565394 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 13:13:08.567612 | orchestrator | + source /opt/venv/bin/activate
2025-07-12 13:13:08.569093 | orchestrator | ++ deactivate nondestructive
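The set-ceph-version.sh and set-openstack-version.sh steps traced above follow a grep-then-sed pattern: confirm the key is present, then rewrite its value in place. A minimal sketch of that pattern against a throwaway file (the real scripts operate on /opt/configuration/environments/manager/configuration.yml):

```shell
#!/usr/bin/env bash
# Sketch of the version-pinning pattern from the trace: verify the key
# exists with grep, then rewrite its value in place with sed.
# Uses a temp file instead of the real configuration.yml.
set -e

CONFIG=$(mktemp)
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$CONFIG"

VERSION=reef
# Only rewrite if the key already exists, as the traced script checks.
if [[ -n $(grep '^ceph_version:' "$CONFIG") ]]; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONFIG"
fi

# ceph_version is now pinned to reef; openstack_version is untouched.
cat "$CONFIG"
```

The guard matters: with no match, the sed would silently change nothing, so the grep check is what distinguishes "pinned" from "key missing" in the traced scripts.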
2025-07-12 13:13:08.569126 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:13:08.569139 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:13:08.569153 | orchestrator | ++ hash -r 2025-07-12 13:13:08.569311 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:13:08.569334 | orchestrator | ++ unset VIRTUAL_ENV 2025-07-12 13:13:08.569346 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-07-12 13:13:08.569361 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-07-12 13:13:08.569534 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-07-12 13:13:08.569559 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-07-12 13:13:08.569571 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-07-12 13:13:08.569586 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-07-12 13:13:08.569606 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-12 13:13:08.569708 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-12 13:13:08.569734 | orchestrator | ++ export PATH 2025-07-12 13:13:08.569816 | orchestrator | ++ '[' -n '' ']' 2025-07-12 13:13:08.569865 | orchestrator | ++ '[' -z '' ']' 2025-07-12 13:13:08.569933 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-07-12 13:13:08.570075 | orchestrator | ++ PS1='(venv) ' 2025-07-12 13:13:08.570304 | orchestrator | ++ export PS1 2025-07-12 13:13:08.570404 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-07-12 13:13:08.570421 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-07-12 13:13:08.570434 | orchestrator | ++ hash -r 2025-07-12 13:13:08.570468 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-07-12 13:13:09.889395 | orchestrator | 2025-07-12 13:13:09.889503 | orchestrator | PLAY [Copy custom facts] 
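The activate/deactivate trace above is the stock virtualenv mechanism: activate saves the old PATH, prepends the venv's bin/ directory, and exports VIRTUAL_ENV; deactivate restores all of it. A sketch of the same round trip against a disposable venv instead of /opt/venv:

```shell
#!/usr/bin/env bash
# Demonstrates the activate/deactivate dance from the trace,
# using a throwaway virtualenv instead of /opt/venv.
set -e

VENV=$(mktemp -d)/venv
python3 -m venv "$VENV"

OLD_PATH=$PATH
source "$VENV/bin/activate"
# activate exports VIRTUAL_ENV and puts the venv's bin/ first on PATH.
echo "VIRTUAL_ENV=$VIRTUAL_ENV"
command -v python

deactivate
# deactivate restores the previous PATH and unsets VIRTUAL_ENV.
[[ $PATH == "$OLD_PATH" ]] && echo "PATH restored"
```

Note also the ansible-playbook invocation in the log: the trailing comma in `-i testbed-manager,` makes Ansible treat the argument as a literal one-host inventory list rather than a path to an inventory file.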
******************************************************* 2025-07-12 13:13:09.889518 | orchestrator | 2025-07-12 13:13:09.889530 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-12 13:13:10.494879 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:10.495031 | orchestrator | 2025-07-12 13:13:10.495049 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-07-12 13:13:11.575067 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:11.575201 | orchestrator | 2025-07-12 13:13:11.575220 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-07-12 13:13:11.575234 | orchestrator | 2025-07-12 13:13:11.575245 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 13:13:14.105609 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:14.105709 | orchestrator | 2025-07-12 13:13:14.105722 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-07-12 13:13:14.155637 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:14.155735 | orchestrator | 2025-07-12 13:13:14.155752 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-07-12 13:13:14.643600 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:14.643722 | orchestrator | 2025-07-12 13:13:14.643739 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-07-12 13:13:14.682851 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:13:14.682911 | orchestrator | 2025-07-12 13:13:14.682927 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-12 13:13:15.044686 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:15.044776 | orchestrator | 2025-07-12 13:13:15.044791 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-07-12 13:13:15.103473 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:13:15.103555 | orchestrator | 2025-07-12 13:13:15.103569 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-07-12 13:13:15.466810 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:15.466913 | orchestrator | 2025-07-12 13:13:15.466927 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-07-12 13:13:15.580682 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:13:15.580844 | orchestrator | 2025-07-12 13:13:15.580864 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-07-12 13:13:15.580877 | orchestrator | 2025-07-12 13:13:15.580890 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 13:13:17.469611 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:17.469721 | orchestrator | 2025-07-12 13:13:17.469738 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-07-12 13:13:17.583491 | orchestrator | included: osism.services.traefik for testbed-manager 2025-07-12 13:13:17.583590 | orchestrator | 2025-07-12 13:13:17.583604 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-07-12 13:13:17.644033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-07-12 13:13:17.644119 | orchestrator | 2025-07-12 13:13:17.644133 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-07-12 13:13:18.765441 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-07-12 13:13:18.765549 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-07-12 13:13:18.765565 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-07-12 13:13:18.765577 | orchestrator | 2025-07-12 13:13:18.765590 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-07-12 13:13:20.721269 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-07-12 13:13:20.721373 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-07-12 13:13:20.721391 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-07-12 13:13:20.721404 | orchestrator | 2025-07-12 13:13:20.721417 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-07-12 13:13:21.433554 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 13:13:21.433658 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:21.433674 | orchestrator | 2025-07-12 13:13:21.433688 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-07-12 13:13:22.113483 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 13:13:22.113584 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:22.113601 | orchestrator | 2025-07-12 13:13:22.113614 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-07-12 13:13:22.178466 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:13:22.178550 | orchestrator | 2025-07-12 13:13:22.178566 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-07-12 13:13:22.581206 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:22.581305 | orchestrator | 2025-07-12 13:13:22.581319 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-07-12 13:13:22.663577 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-07-12 13:13:22.663673 | orchestrator | 2025-07-12 13:13:22.663687 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-07-12 13:13:23.817411 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:23.817522 | orchestrator | 2025-07-12 13:13:23.817540 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-07-12 13:13:24.698591 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:24.698688 | orchestrator | 2025-07-12 13:13:24.698704 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-07-12 13:13:37.019369 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:37.019483 | orchestrator | 2025-07-12 13:13:37.019500 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-07-12 13:13:37.071374 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:13:37.071472 | orchestrator | 2025-07-12 13:13:37.071497 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-07-12 13:13:37.071513 | orchestrator | 2025-07-12 13:13:37.071525 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 13:13:38.974766 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:38.974873 | orchestrator | 2025-07-12 13:13:38.974918 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-07-12 13:13:39.089400 | orchestrator | included: osism.services.manager for testbed-manager 2025-07-12 13:13:39.089507 | orchestrator | 2025-07-12 13:13:39.089524 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-07-12 13:13:39.160454 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 13:13:39.160535 | orchestrator | 2025-07-12 13:13:39.160548 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-07-12 13:13:42.046604 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:42.046716 | orchestrator | 2025-07-12 13:13:42.046734 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-07-12 13:13:42.092848 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:42.092917 | orchestrator | 2025-07-12 13:13:42.092934 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-07-12 13:13:42.229056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-07-12 13:13:42.229142 | orchestrator | 2025-07-12 13:13:42.229155 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-07-12 13:13:45.269200 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-07-12 13:13:45.269305 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-07-12 13:13:45.269320 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-07-12 13:13:45.269332 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-07-12 13:13:45.269344 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-07-12 13:13:45.269355 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-07-12 13:13:45.269366 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-07-12 13:13:45.269377 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-07-12 13:13:45.269388 | orchestrator | 2025-07-12 13:13:45.269402 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-07-12 13:13:45.979577 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:45.979673 | orchestrator | 2025-07-12 13:13:45.979688 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-07-12 13:13:46.650428 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:46.650545 | orchestrator | 2025-07-12 13:13:46.650567 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-07-12 13:13:46.739919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-07-12 13:13:46.740010 | orchestrator | 2025-07-12 13:13:46.740018 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-07-12 13:13:48.008204 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-07-12 13:13:48.008326 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-07-12 13:13:48.008350 | orchestrator | 2025-07-12 13:13:48.008364 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-07-12 13:13:48.672350 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:48.672454 | orchestrator | 2025-07-12 13:13:48.672470 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-07-12 13:13:48.737055 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:13:48.737145 | orchestrator | 2025-07-12 13:13:48.737159 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-07-12 13:13:48.796279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-07-12 13:13:48.796326 | orchestrator | 2025-07-12 13:13:48.796340 | orchestrator | TASK 
[osism.services.manager : Copy private ssh keys] ************************** 2025-07-12 13:13:50.280872 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 13:13:50.280980 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 13:13:50.281028 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:50.281041 | orchestrator | 2025-07-12 13:13:50.281053 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-07-12 13:13:50.929713 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:50.929813 | orchestrator | 2025-07-12 13:13:50.929828 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-07-12 13:13:50.998362 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:13:50.998457 | orchestrator | 2025-07-12 13:13:50.998479 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-07-12 13:13:51.120825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-07-12 13:13:51.120924 | orchestrator | 2025-07-12 13:13:51.120938 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-07-12 13:13:51.671071 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:51.671177 | orchestrator | 2025-07-12 13:13:51.671193 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-07-12 13:13:52.091523 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:52.091621 | orchestrator | 2025-07-12 13:13:52.091636 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-07-12 13:13:53.398529 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-07-12 13:13:53.398636 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-07-12 
13:13:53.398652 | orchestrator | 2025-07-12 13:13:53.398665 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-07-12 13:13:54.038550 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:54.038662 | orchestrator | 2025-07-12 13:13:54.038682 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-07-12 13:13:54.468754 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:54.468859 | orchestrator | 2025-07-12 13:13:54.468876 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-07-12 13:13:54.846654 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:54.846764 | orchestrator | 2025-07-12 13:13:54.846781 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-07-12 13:13:54.895504 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:13:54.895585 | orchestrator | 2025-07-12 13:13:54.895600 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-07-12 13:13:54.986265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-07-12 13:13:54.986367 | orchestrator | 2025-07-12 13:13:54.986384 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-07-12 13:13:55.040781 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:55.040865 | orchestrator | 2025-07-12 13:13:55.040880 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-07-12 13:13:57.137566 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-07-12 13:13:57.137745 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-07-12 13:13:57.137763 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-07-12 
13:13:57.137776 | orchestrator | 2025-07-12 13:13:57.137789 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-07-12 13:13:57.883620 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:57.883726 | orchestrator | 2025-07-12 13:13:57.883742 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-07-12 13:13:58.631654 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:58.631756 | orchestrator | 2025-07-12 13:13:58.631773 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-07-12 13:13:59.352396 | orchestrator | changed: [testbed-manager] 2025-07-12 13:13:59.352496 | orchestrator | 2025-07-12 13:13:59.352513 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-07-12 13:13:59.421601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-07-12 13:13:59.421663 | orchestrator | 2025-07-12 13:13:59.421679 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-07-12 13:13:59.468132 | orchestrator | ok: [testbed-manager] 2025-07-12 13:13:59.468192 | orchestrator | 2025-07-12 13:13:59.468206 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-07-12 13:14:00.208697 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-07-12 13:14:00.208798 | orchestrator | 2025-07-12 13:14:00.208813 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-07-12 13:14:00.302194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-07-12 13:14:00.302300 | orchestrator | 2025-07-12 13:14:00.302316 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2025-07-12 13:14:01.031387 | orchestrator | changed: [testbed-manager] 2025-07-12 13:14:01.031484 | orchestrator | 2025-07-12 13:14:01.031498 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-07-12 13:14:01.658988 | orchestrator | ok: [testbed-manager] 2025-07-12 13:14:01.659163 | orchestrator | 2025-07-12 13:14:01.659181 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-07-12 13:14:01.719474 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:14:01.719539 | orchestrator | 2025-07-12 13:14:01.719553 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-07-12 13:14:01.777733 | orchestrator | ok: [testbed-manager] 2025-07-12 13:14:01.777799 | orchestrator | 2025-07-12 13:14:01.777812 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-07-12 13:14:02.660506 | orchestrator | changed: [testbed-manager] 2025-07-12 13:14:02.660659 | orchestrator | 2025-07-12 13:14:02.660685 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-07-12 13:15:12.156971 | orchestrator | changed: [testbed-manager] 2025-07-12 13:15:12.157140 | orchestrator | 2025-07-12 13:15:12.157159 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-07-12 13:15:13.153553 | orchestrator | ok: [testbed-manager] 2025-07-12 13:15:13.153667 | orchestrator | 2025-07-12 13:15:13.153683 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-07-12 13:15:13.212351 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:15:13.212440 | orchestrator | 2025-07-12 13:15:13.212455 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2025-07-12 13:15:16.103721 | orchestrator | changed: [testbed-manager] 2025-07-12 13:15:16.103833 | orchestrator | 2025-07-12 13:15:16.103853 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-07-12 13:15:16.149412 | orchestrator | ok: [testbed-manager] 2025-07-12 13:15:16.149505 | orchestrator | 2025-07-12 13:15:16.149519 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-07-12 13:15:16.149531 | orchestrator | 2025-07-12 13:15:16.149542 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-07-12 13:15:16.189924 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:15:16.190007 | orchestrator | 2025-07-12 13:15:16.190114 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-07-12 13:16:16.241694 | orchestrator | Pausing for 60 seconds 2025-07-12 13:16:16.241814 | orchestrator | changed: [testbed-manager] 2025-07-12 13:16:16.241832 | orchestrator | 2025-07-12 13:16:16.241846 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-07-12 13:16:20.317500 | orchestrator | changed: [testbed-manager] 2025-07-12 13:16:20.317638 | orchestrator | 2025-07-12 13:16:20.317665 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-07-12 13:17:02.108566 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-07-12 13:17:02.108689 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-07-12 13:17:02.108705 | orchestrator | changed: [testbed-manager]
2025-07-12 13:17:02.108719 | orchestrator |
2025-07-12 13:17:02.108731 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-12 13:17:11.851162 | orchestrator | changed: [testbed-manager]
2025-07-12 13:17:11.851305 | orchestrator |
2025-07-12 13:17:11.851339 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-12 13:17:11.940461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-12 13:17:11.940589 | orchestrator |
2025-07-12 13:17:11.940605 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-12 13:17:11.940618 | orchestrator |
2025-07-12 13:17:11.940630 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-12 13:17:11.981237 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:17:11.981321 | orchestrator |
2025-07-12 13:17:11.981335 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:17:11.981348 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-12 13:17:11.981359 | orchestrator |
2025-07-12 13:17:12.119602 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 13:17:12.119689 | orchestrator | + deactivate
2025-07-12 13:17:12.119703 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-12 13:17:12.119716 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 13:17:12.119726 | orchestrator | + export PATH
2025-07-12 13:17:12.119738 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-12 13:17:12.119749 | orchestrator | + '[' -n '' ']'
2025-07-12 13:17:12.119760 | orchestrator | + hash -r
2025-07-12 13:17:12.119771 | orchestrator | + '[' -n '' ']'
2025-07-12 13:17:12.119781 | orchestrator | + unset VIRTUAL_ENV
2025-07-12 13:17:12.119792 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-12 13:17:12.119825 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-12 13:17:12.119836 | orchestrator | + unset -f deactivate
2025-07-12 13:17:12.119848 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-12 13:17:12.125260 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-12 13:17:12.125287 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-12 13:17:12.125298 | orchestrator | + local max_attempts=60
2025-07-12 13:17:12.125309 | orchestrator | + local name=ceph-ansible
2025-07-12 13:17:12.125320 | orchestrator | + local attempt_num=1
2025-07-12 13:17:12.125907 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:17:12.163841 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:17:12.163898 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-12 13:17:12.163911 | orchestrator | + local max_attempts=60
2025-07-12 13:17:12.163922 | orchestrator | + local name=kolla-ansible
2025-07-12 13:17:12.163933 | orchestrator | + local attempt_num=1
2025-07-12 13:17:12.164600 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-12 13:17:12.198503 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:17:12.198567 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-12 13:17:12.198579 | orchestrator | + local max_attempts=60
2025-07-12 13:17:12.198590 | orchestrator | + local name=osism-ansible
2025-07-12 13:17:12.198601 | orchestrator | + local attempt_num=1
2025-07-12 13:17:12.199844 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-12 13:17:12.238472 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:17:12.238521 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-12 13:17:12.238535 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-12 13:17:12.992555 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-12 13:17:13.212917 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-12 13:17:13.213011 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-07-12 13:17:13.213026 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-07-12 13:17:13.213038 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-07-12 13:17:13.213050 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-07-12 13:17:13.213131 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-07-12 13:17:13.213145 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-07-12 13:17:13.213156 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-07-12 13:17:13.213166 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-07-12 13:17:13.213177 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-07-12 13:17:13.213188 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-07-12 13:17:13.213198 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-07-12 13:17:13.213209 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-07-12 13:17:13.213220 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-07-12 13:17:13.213230 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-07-12 13:17:13.219581 | orchestrator | ++ semver latest 7.0.0
2025-07-12 13:17:13.266520 | orchestrator | + [[ -1 -ge 0 ]]
2025-07-12 13:17:13.266583 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-12 13:17:13.266598 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-07-12 13:17:13.268927 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-07-12 13:17:25.342081 | orchestrator | 2025-07-12 13:17:25 | INFO  | Task 28b762c2-4d02-4e34-944f-90b7e23be735 (resolvconf) was prepared for execution.
2025-07-12 13:17:25.342238 | orchestrator | 2025-07-12 13:17:25 | INFO  | It takes a moment until task 28b762c2-4d02-4e34-944f-90b7e23be735 (resolvconf) has been started and output is visible here.
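The `wait_for_container_healthy` calls traced above poll `docker inspect` for each manager container until Docker reports a `healthy` state. A minimal sketch of such a helper, reconstructed from the `set -x` output (the 60-attempt bound and the inspect probe come from the trace; the retry sleep and the `DOCKER` override are assumptions for illustration):

```shell
# Sketch of the wait_for_container_healthy helper seen in the trace above.
# DOCKER is an illustrative override so the function can be exercised
# without a real Docker daemon; the trace uses /usr/bin/docker directly.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    # Poll the container's health status until Docker reports "healthy".
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

On the testbed this is invoked as `wait_for_container_healthy 60 ceph-ansible` and so on, matching the trace; here all three containers were already healthy, so each call returned on the first probe.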
2025-07-12 13:17:38.927191 | orchestrator |
2025-07-12 13:17:38.927311 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-07-12 13:17:38.927326 | orchestrator |
2025-07-12 13:17:38.927340 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:17:38.927352 | orchestrator | Saturday 12 July 2025 13:17:29 +0000 (0:00:00.165) 0:00:00.165 *********
2025-07-12 13:17:38.927364 | orchestrator | ok: [testbed-manager]
2025-07-12 13:17:38.927375 | orchestrator |
2025-07-12 13:17:38.927386 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-07-12 13:17:38.927398 | orchestrator | Saturday 12 July 2025 13:17:32 +0000 (0:00:03.691) 0:00:03.856 *********
2025-07-12 13:17:38.927409 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:17:38.927420 | orchestrator |
2025-07-12 13:17:38.927436 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-07-12 13:17:38.927447 | orchestrator | Saturday 12 July 2025 13:17:33 +0000 (0:00:00.060) 0:00:03.917 *********
2025-07-12 13:17:38.927482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-07-12 13:17:38.927494 | orchestrator |
2025-07-12 13:17:38.927505 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-07-12 13:17:38.927516 | orchestrator | Saturday 12 July 2025 13:17:33 +0000 (0:00:00.083) 0:00:04.000 *********
2025-07-12 13:17:38.927527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 13:17:38.927538 | orchestrator |
2025-07-12 13:17:38.927549 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-07-12 13:17:38.927559 | orchestrator | Saturday 12 July 2025 13:17:33 +0000 (0:00:00.069) 0:00:04.070 *********
2025-07-12 13:17:38.927570 | orchestrator | ok: [testbed-manager]
2025-07-12 13:17:38.927581 | orchestrator |
2025-07-12 13:17:38.927591 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-07-12 13:17:38.927602 | orchestrator | Saturday 12 July 2025 13:17:34 +0000 (0:00:01.097) 0:00:05.168 *********
2025-07-12 13:17:38.927612 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:17:38.927623 | orchestrator |
2025-07-12 13:17:38.927634 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-07-12 13:17:38.927644 | orchestrator | Saturday 12 July 2025 13:17:34 +0000 (0:00:00.507) 0:00:05.229 *********
2025-07-12 13:17:38.927655 | orchestrator | ok: [testbed-manager]
2025-07-12 13:17:38.927666 | orchestrator |
2025-07-12 13:17:38.927676 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-07-12 13:17:38.927687 | orchestrator | Saturday 12 July 2025 13:17:34 +0000 (0:00:00.092) 0:00:05.737 *********
2025-07-12 13:17:38.927697 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:17:38.927708 | orchestrator |
2025-07-12 13:17:38.927719 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-07-12 13:17:38.927731 | orchestrator | Saturday 12 July 2025 13:17:34 +0000 (0:00:00.514) 0:00:05.830 *********
2025-07-12 13:17:38.927742 | orchestrator | changed: [testbed-manager]
2025-07-12 13:17:38.927753 | orchestrator |
2025-07-12 13:17:38.927764 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-07-12 13:17:38.927774 | orchestrator | Saturday 12 July 2025 13:17:35 +0000 (0:00:01.076) 0:00:06.345 *********
2025-07-12 13:17:38.927785 | orchestrator | changed: [testbed-manager]
2025-07-12 13:17:38.927795 | orchestrator |
2025-07-12 13:17:38.927806 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-07-12 13:17:38.927817 | orchestrator | Saturday 12 July 2025 13:17:36 +0000 (0:00:01.076) 0:00:07.422 *********
2025-07-12 13:17:38.927827 | orchestrator | ok: [testbed-manager]
2025-07-12 13:17:38.927838 | orchestrator |
2025-07-12 13:17:38.927849 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-07-12 13:17:38.927859 | orchestrator | Saturday 12 July 2025 13:17:37 +0000 (0:00:00.943) 0:00:08.366 *********
2025-07-12 13:17:38.927870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-07-12 13:17:38.927881 | orchestrator |
2025-07-12 13:17:38.927900 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-07-12 13:17:38.927912 | orchestrator | Saturday 12 July 2025 13:17:37 +0000 (0:00:00.088) 0:00:08.454 *********
2025-07-12 13:17:38.927922 | orchestrator | changed: [testbed-manager]
2025-07-12 13:17:38.927933 | orchestrator |
2025-07-12 13:17:38.927944 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:17:38.927955 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 13:17:38.927966 | orchestrator |
2025-07-12 13:17:38.927977 | orchestrator |
2025-07-12 13:17:38.927988 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:17:38.928007 | orchestrator | Saturday 12 July 2025 13:17:38 +0000 (0:00:01.111) 0:00:09.566 *********
2025-07-12 13:17:38.928018 | orchestrator | ===============================================================================
2025-07-12 13:17:38.928028 | orchestrator | Gathering Facts --------------------------------------------------------- 3.69s
2025-07-12 13:17:38.928054 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.11s
2025-07-12 13:17:38.928065 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.10s
2025-07-12 13:17:38.928076 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s
2025-07-12 13:17:38.928086 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s
2025-07-12 13:17:38.928097 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s
2025-07-12 13:17:38.928151 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s
2025-07-12 13:17:38.928163 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-07-12 13:17:38.928174 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-07-12 13:17:38.928185 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-07-12 13:17:38.928195 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-07-12 13:17:38.928206 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-07-12 13:17:38.928217 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-07-12 13:17:39.180772 | orchestrator | + osism apply sshconfig
2025-07-12 13:17:51.108659 | orchestrator | 2025-07-12 13:17:51 | INFO  | Task 7bc6c63f-d053-4eda-acc0-86b7d75eb2bf (sshconfig) was prepared for execution.
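Going by the task names in the resolvconf play above, the role's net effect on this Debian-family host is to point `/etc/resolv.conf` at systemd-resolved's stub resolver. A rough shell equivalent of just that linking step (a sketch, not the role's actual code: `ROOT` is an illustrative prefix defaulting to a scratch directory so the sketch never touches the real `/etc`, the `127.0.0.53` stub content is assumed, and the package removal, archiving, and service handling the role also performs are omitted):

```shell
# Illustrative approximation of the resolvconf link task above, NOT the
# role's implementation. On a real host the role operates on / directly.
set -eu
ROOT="${ROOT:-$(mktemp -d)}"

mkdir -p "$ROOT/run/systemd/resolve" "$ROOT/etc"

# systemd-resolved maintains the stub file on a real system; fake one here.
printf 'nameserver 127.0.0.53\n' > "$ROOT/run/systemd/resolve/stub-resolv.conf"

# "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf"
ln -sfn /run/systemd/resolve/stub-resolv.conf "$ROOT/etc/resolv.conf"

echo "resolv.conf -> $(readlink "$ROOT/etc/resolv.conf")"
```

The symlink is why the task reports `changed` on a freshly provisioned manager: cloud images typically ship a plain `/etc/resolv.conf` file rather than the systemd-resolved stub link.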
2025-07-12 13:17:51.108763 | orchestrator | 2025-07-12 13:17:51 | INFO  | It takes a moment until task 7bc6c63f-d053-4eda-acc0-86b7d75eb2bf (sshconfig) has been started and output is visible here.
2025-07-12 13:18:02.665076 | orchestrator |
2025-07-12 13:18:02.665241 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-07-12 13:18:02.665259 | orchestrator |
2025-07-12 13:18:02.665272 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-07-12 13:18:02.665283 | orchestrator | Saturday 12 July 2025 13:17:55 +0000 (0:00:00.174) 0:00:00.174 *********
2025-07-12 13:18:02.665295 | orchestrator | ok: [testbed-manager]
2025-07-12 13:18:02.665306 | orchestrator |
2025-07-12 13:18:02.665318 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-07-12 13:18:02.665329 | orchestrator | Saturday 12 July 2025 13:17:55 +0000 (0:00:00.544) 0:00:00.718 *********
2025-07-12 13:18:02.665339 | orchestrator | changed: [testbed-manager]
2025-07-12 13:18:02.665351 | orchestrator |
2025-07-12 13:18:02.665362 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-07-12 13:18:02.665373 | orchestrator | Saturday 12 July 2025 13:17:56 +0000 (0:00:00.509) 0:00:01.228 *********
2025-07-12 13:18:02.665384 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-07-12 13:18:02.665395 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-07-12 13:18:02.665406 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-07-12 13:18:02.665417 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-07-12 13:18:02.665428 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-07-12 13:18:02.665438 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-07-12 13:18:02.665470 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-07-12 13:18:02.665482 | orchestrator |
2025-07-12 13:18:02.665493 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-07-12 13:18:02.665504 | orchestrator | Saturday 12 July 2025 13:18:01 +0000 (0:00:05.690) 0:00:06.918 *********
2025-07-12 13:18:02.665537 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:18:02.665548 | orchestrator |
2025-07-12 13:18:02.665559 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-07-12 13:18:02.665570 | orchestrator | Saturday 12 July 2025 13:18:01 +0000 (0:00:00.076) 0:00:06.995 *********
2025-07-12 13:18:02.665581 | orchestrator | changed: [testbed-manager]
2025-07-12 13:18:02.665592 | orchestrator |
2025-07-12 13:18:02.665603 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:18:02.665615 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:18:02.665629 | orchestrator |
2025-07-12 13:18:02.665641 | orchestrator |
2025-07-12 13:18:02.665653 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:18:02.665666 | orchestrator | Saturday 12 July 2025 13:18:02 +0000 (0:00:00.584) 0:00:07.579 *********
2025-07-12 13:18:02.665678 | orchestrator | ===============================================================================
2025-07-12 13:18:02.665691 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.69s
2025-07-12 13:18:02.665703 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s
2025-07-12 13:18:02.665715 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s
2025-07-12 13:18:02.665728 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s
2025-07-12 13:18:02.665740 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2025-07-12 13:18:02.943784 | orchestrator | + osism apply known-hosts
2025-07-12 13:18:14.884091 | orchestrator | 2025-07-12 13:18:14 | INFO  | Task d8c8f686-7bd3-4df6-a07b-10cea8e5af0c (known-hosts) was prepared for execution.
2025-07-12 13:18:14.884258 | orchestrator | 2025-07-12 13:18:14 | INFO  | It takes a moment until task d8c8f686-7bd3-4df6-a07b-10cea8e5af0c (known-hosts) has been started and output is visible here.
2025-07-12 13:18:31.531013 | orchestrator |
2025-07-12 13:18:31.531172 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-12 13:18:31.531192 | orchestrator |
2025-07-12 13:18:31.531205 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-12 13:18:31.531217 | orchestrator | Saturday 12 July 2025 13:18:18 +0000 (0:00:00.169) 0:00:00.169 *********
2025-07-12 13:18:31.531229 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-12 13:18:31.531241 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-12 13:18:31.531252 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-12 13:18:31.531263 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-12 13:18:31.531274 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-12 13:18:31.531286 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-12 13:18:31.531297 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-12 13:18:31.531308 | orchestrator |
2025-07-12 13:18:31.531319 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-12 13:18:31.531331 | orchestrator | Saturday 12 July 2025 13:18:24 +0000 (0:00:06.079) 0:00:06.249 ********* 2025-07-12
13:18:31.531344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-12 13:18:31.531357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-12 13:18:31.531368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-12 13:18:31.531379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-12 13:18:31.531416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-12 13:18:31.531439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-12 13:18:31.531451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-12 13:18:31.531462 | orchestrator | 2025-07-12 13:18:31.531473 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:31.531484 | orchestrator | Saturday 12 July 2025 13:18:25 +0000 (0:00:00.174) 0:00:06.423 ********* 2025-07-12 13:18:31.531499 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDBV81Lk8NBqR6OpHNtCZMkC9YI6gXKLby3fka9tJKuUSELT6IAGTORc/7GzrcGVeAFL+tUkrF0dtUXP1J4mIhohl0o6WOmOcio8msBIepa6COx9MI5WWm79pTmce9PAjb1+Ox2xoH3u/F8uV3XUsN3R6Eum3nshVJDabfsIQ2Y+ngVjgK48XLOU1sb3W/60FK4K4h2LJu/L8FUDYaxDgjfuEzz+6fMYl8gsNnh+4+UgxXmOmd43FGEyyL0XOLkCHbdoMIjtjXxH5WSg44HwJ9+cYoZk9yBcgA11xXz5/udjYjDt5Dwl9/qYM23BOMyRU3rssvSArCqa+XQQvZdhjUou9KpNviyd/+Fnmz9yQl4IFIciKR9QkZDdZRV0jwvyrfKp/0d2QxStyZK/hxs8wvZQ4iBooaAIaUCqNd/WyOVKfR9bRqCGwG4JfR3Ve5RXBZHsgKmU/57qw6WB/U3mJ9wgmrrs6+txxyV/iLwJhk951SEB7DKPv6f8yNLlvAlIq0=) 2025-07-12 13:18:31.531513 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINJ0I50+CJIgl8aTLK4T1du0YNPsRClTRNrjU4300ItI) 2025-07-12 13:18:31.531527 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLFPHgQdJwVW1URg4DQgTLKK/yDIq+Q/6s9qUSaqTAQJl7TrD6tVVTTee4xpfDHzXuZGx3jzeNdJh4/6yo4OIZc=) 2025-07-12 13:18:31.531541 | orchestrator | 2025-07-12 13:18:31.531554 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:31.531566 | orchestrator | Saturday 12 July 2025 13:18:26 +0000 (0:00:01.193) 0:00:07.616 ********* 2025-07-12 13:18:31.531578 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP2pVFvQdlz+aY6IaDV8AGlmo0at3WjByA59SLveFG7ljAyFtCO+0H8o70GiddYpST1Jbo6srjvW2qoFqXzcOb0=) 2025-07-12 13:18:31.531620 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwDygybsHMVWVjqsJVZGy1rVAUhJBWHC0j0+zU4Vwy+C8lYBLL9Y1IOlPCjHbFpqEHMvu6g0W+I3b196GG50SpP66JI6m0Z1NT1yI64gTUHSmMFKCGc28qvFjCZhb4l7TDXbeRH8IMIQQKWT+c/Y1gb0B6XpWYyJyz1z7gYkhi/oOGMsavFOuW5XZqe4wzSynbjUZWRRFvR45tjGw//MEQIOKhL4tRdqSLCIc0YwFgNJiuvlZ5hRCmk7XY6t5JeLd63Y0+a+N9hD10R7ONRx3KfW4DW/Ws47cfpBDStmVu2P0q1qCjTI0T0hfLmR8bQvjKP10+p1LX/q+ABDEyLV5A0YokbQwJ6Wp7V0k9w8Y1fO/oF8CpWdaJiq8n1ZxPzfqer5RkeenzbWbFL+GU9HsnonqgJGxHxeTDp2VbS/Wp79WvsVeAdLZ9l23gblIIT2z56jL6uOUsKuDUlDdyJv/cRSk0/+yRKJDE7lIis1o0HDwjLL+b/TmgWb/6Oqkstvs=) 2025-07-12 13:18:31.531634 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILrqyiEmg+9rigA8bB7RFyCK+dxB2RGgDYjWVzQJ2LYN) 2025-07-12 13:18:31.531646 | orchestrator | 2025-07-12 13:18:31.531659 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:31.531671 | orchestrator | Saturday 12 July 2025 13:18:27 +0000 (0:00:01.045) 0:00:08.662 ********* 2025-07-12 13:18:31.531683 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBVxMOl7v6lyJpaDkeBYxl4RtcR3hzYI2H0DE55ldJrGwA0ud5UJEnWCmD3dN11sNIcxAoCNll6aT3c5BJq2Eio=) 2025-07-12 13:18:31.531696 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZyTLmxZcUl9NliWGQglKEEr5cL5QyaZZ+9kMbeGCkV76ahzMbxNvWS+E7OqRztBaksoLv3WO2GiYOcUc5u+F+cObIRb3FYSpGY7ziz4Ss77DFvUQnHc6ql9hHL8PRHbVkp2nhETAJ+gwo0g4Ji76NkngCbl/CPKiUFpEbVkepZbsLYwggVmD+Q1bdaya7avUXtHR/O1ruIRMngSH8HFKdCh9T1L4oibsscEXw4e0GCQ+3VhDZBrU8aOD6/XGzF/Zg0fPOmmzs2TwQYjD2HCyFJe+ZsKSeEQKlU7gGuAEytpHav1ge5Ak+DtBd0/DDvJedN350PCrBrRh26u+lmIT/TaguwtD174Ih2CmgpdwqN0WehHfJDURJybheKBNeEk5Or+FDDLgsaoBfwg522YdzSMQo+IjC2izsHZB6U2uMde77Ej6KTVN6+Hexa4jZ/tAWzg+hwUjTlQsZixEC8PX2nJblammrnYoUpV9O34dZueo8++hdWfhdflLFStJ6f6c=) 2025-07-12 13:18:31.531717 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHJlsbc7IQJvEW40ljXE8EN4vJ9fZ8m+/i2wezItNdUD) 2025-07-12 13:18:31.531729 | orchestrator | 2025-07-12 13:18:31.531742 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:31.531754 | orchestrator | Saturday 12 July 2025 13:18:28 +0000 (0:00:01.092) 0:00:09.754 ********* 2025-07-12 13:18:31.531830 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKMaPkNeubE+4+eNlgaK3XyDfH5hPcx4o5NB+ez6oX2A) 2025-07-12 13:18:31.531845 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpbVQ9s6ZIoV9WIunsKxh3YplMiX1IxRlBy6bQiYCIFvEvqeQwRuFQ5e338R7NJXVj06CyU3/vJ4UcQ5C31fV5o5KBr5S0HF9L7DW0ThoW6nLN9+4hbSfw4VgW38+j2DWZL6hRutT7pXBwiY0pdK2gb9uaQP1hxgLIQGeXQOSOgRA5BfeJhpib5TCx928pzkhPIIQD2oYF+tCSUaiZMNSxQpAr4KxrNA3nrszgn4Cy1okU+1b164I18BoqRK+BO5ZM556y/OxDYB0HymFP70H8cRg+zIb6UuZunBdvqlv+SK6SjPq7Y1cxaoI0NDQq6mvIdV57SYZ25xHgPwHAlAHcst89farfn+Mx098nh4pbMaEgVpvxusm09e2iXEU89gNSg+d6rDn8fggspzpKWeZSj143xNiTk3mpWu5y/LOQUKEchzGefsGtmlUJrdy+WhNYr0Z8PvHbw13l5nn5yd5BSvbzHUk9d5/D2+neTxYnGOr1DdH6NtBt0mm0bdIffB0=) 2025-07-12 13:18:31.531858 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFbm2iK8pOhf5ZRpA+3ppsyBP/dxxC1PULSEnwFVDJrlYHuPmD9qgliQiFl+a1KFbD/h1Shy1UnCSDHRQk8Kk88=) 2025-07-12 13:18:31.531870 | orchestrator | 2025-07-12 13:18:31.531882 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:31.531895 | orchestrator | Saturday 12 July 2025 13:18:29 +0000 (0:00:01.063) 0:00:10.817 ********* 2025-07-12 13:18:31.531907 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC9yR/lfN8lwlxlp4ckrptKrBWZky9amjFgWCHfMz4zpUlKUysEvRQDP9DTAD/I3yG5pADHoD4RT5Qjsdi0kztY=) 2025-07-12 13:18:31.531920 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFS/GQrmfox2RixkI/cu4tOWnocT19YDw4gCAsoS8YaL) 2025-07-12 13:18:31.531931 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCW8SwVoBiX7s2L80yCS1q9wmkWCtxtXZfvIHgxeg1YhnAhDv5qREWqBBflo1YQqIgVMNMCl/WnedMp5SASIy0x0JR3mWA7B0e00yIt2sbs+VqrNv2nW1DjsTUcz4RkY4Gy+I4Ket4NiwtNAo4L/kWB3DW1SznP6tkQ7/8JWc//dYPNL4IsnRp3R54AOQN+E3+06GO+W/U6GoLC3RKHmt9d83XUr/Hy1um6wEKp+tSdg48JiigoV/+ALr0/z52f9Xe1JuLMSYUTsRaEt3YvzsNE7xoYPemf1BFcle8m/iCvh5kKaeVpG17P2U2Nq1quMP6KSHruDUIkv8b+jkOxMkRr7kBbE7xGcx7xXC4ybSzQZ8Z1qGFdIWKC9er2AZiYV0piFfz7gbiosRWtCoPlCxxcAmQndbmoq+VuhakedySEYi1/3iTssehwICqy1pp/OBteZko2PbVHyQDL5lOEiNl9hQ7+g/1Ssi5zhb3k9KwQf48/ZWp3FBxm6gGnYrK289E=) 2025-07-12 13:18:31.531943 | orchestrator | 2025-07-12 13:18:31.531954 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:31.531965 | orchestrator | Saturday 12 July 2025 13:18:30 +0000 (0:00:01.035) 0:00:11.853 ********* 2025-07-12 13:18:31.531985 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeSfTLNyUkTGOU/GPqAF8Oc86LSXpO6tg7Q2fSdVo6wzlxReYjHyMuuOm7SwLLBYJ945rRVTyW3Z8RNpoBliwv28cbM2CUBAVwrp6DU/YaQ7Kp+ImmcH68rh4adioynVGIK80FeVnL2j30nLCA7JYz73rjk+uQuGrSm5/OLYJ92oRrhpENmBQJm47r9yNB+ki/NiQO69bJ3YOxLKqaz25fqio3RYUKZUV3L6g2OW+UD2DqeYyvlA30lDE+6t89MsjTOeevSHk8Hy2bnTmS6uZXqTz2Ly/ntiiwM7ajAC0mEyuSQ5cbVpLaL83d94VS5MnU4hYsKx35hFCMH3Gq3RXJ09pdBEHCj/RzyvOMeXaUKNUlNfe2iwj1rpKuAHoXoYR/8eOKEPgg79DhKLY5Zu+yPyDYAWB7i/zE0bStb6aIHIOGCBMIY5Gu0az9s1WyoNsjbd8b38OCkZB2nkPRhGVX/JQ9kguwiPUeeOufjs/o1TjBWueJ2uppm/23w5imANU=) 2025-07-12 13:18:42.300928 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLI/DGWn7Nx6xIKexylCi26YKUgxfIZEchJC1NELghWVjeUirTIIr62PncpK2nQRIY/YsMse/kYxjJwSfv3RJw8=) 2025-07-12 13:18:42.301042 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFaK3Mw8AtVePsH9B00xLVQ0MnXxgkIgEBHC2PLA0vFC) 2025-07-12 13:18:42.301059 | orchestrator | 2025-07-12 13:18:42.301072 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 13:18:42.301085 | orchestrator | Saturday 12 July 2025 13:18:31 +0000 (0:00:00.980) 0:00:12.833 ********* 2025-07-12 13:18:42.301097 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIY6MJDLtNMiUJcisOf0mIT3x8FfQ5EPreFFPFf3sZIu) 2025-07-12 13:18:42.301110 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8M6VGRGPmqE+iUcUyCDfpZb7NSQwXPHlPhdP6wLvZVU9qJ3rh999+A4XmS769EQNQhzFQqkAynA5J86accefNJAznEZzq93EsdxVzF1WmhLf+4v+Qw+u3+dWUDqkdk/Gvhd9ea63QVsunsfT8TKjA7t2wphEPbXOu0DaulPPK5zIuDtCAJXV0WCxulUKpQ1rc0DYv7i0UA+ZwtforEpZNw2YB2GNe+g5kEIsp7/r8EL2IKapvOZI6pfTZos0i83KDINgtiFvOqI7kA/PVowyK/Qn1o6+sNQo6vOn1uw6CBi2HNic5BdYJLMI0E9oq20Xlew1l6E0jHp9S+pGwDxs7QF2m27cun4tCrQQXoUOQKUJJaRSC0wpG0NoRJz4RtlhvBhSRP/DHrSLzfLvwhn+TmVAeFMLiVZWbpO9yUmOihRs6cnhOD1ew9CbL28UkEk+TQvXBd87k6aXETpImrl7iX+ZcBOEYxcS7bAdi5TIABnA73BS+6dSCAAISLfff130=) 2025-07-12 13:18:42.301124 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEiofz0GVA0d4HRkfkSQvfIjsbWTT8u1YXDrWg/HdAEwUe1lEYi4cG3KaNgjlttHo2Buy6xcIhDuej9Tkk/StDs=) 2025-07-12 13:18:42.301180 | orchestrator | 2025-07-12 13:18:42.301192 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-07-12 13:18:42.301222 | orchestrator | Saturday 12 July 2025 13:18:32 +0000 (0:00:01.034) 0:00:13.868 ********* 
2025-07-12 13:18:42.301234 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-12 13:18:42.301246 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-12 13:18:42.301256 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-12 13:18:42.301267 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-12 13:18:42.301277 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-12 13:18:42.301288 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-12 13:18:42.301298 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-12 13:18:42.301309 | orchestrator |
2025-07-12 13:18:42.301320 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-07-12 13:18:42.301332 | orchestrator | Saturday 12 July 2025 13:18:37 +0000 (0:00:05.233) 0:00:19.102 *********
2025-07-12 13:18:42.301344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-12 13:18:42.301356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-12 13:18:42.301367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-12 13:18:42.301378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-12 13:18:42.301388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-12 13:18:42.301423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-12 13:18:42.301434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-12 13:18:42.301446 | orchestrator |
2025-07-12 13:18:42.301458 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:18:42.301470 | orchestrator | Saturday 12 July 2025 13:18:37 +0000 (0:00:00.171) 0:00:19.274 *********
2025-07-12 13:18:42.301482 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINJ0I50+CJIgl8aTLK4T1du0YNPsRClTRNrjU4300ItI)
2025-07-12 13:18:42.301520 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBV81Lk8NBqR6OpHNtCZMkC9YI6gXKLby3fka9tJKuUSELT6IAGTORc/7GzrcGVeAFL+tUkrF0dtUXP1J4mIhohl0o6WOmOcio8msBIepa6COx9MI5WWm79pTmce9PAjb1+Ox2xoH3u/F8uV3XUsN3R6Eum3nshVJDabfsIQ2Y+ngVjgK48XLOU1sb3W/60FK4K4h2LJu/L8FUDYaxDgjfuEzz+6fMYl8gsNnh+4+UgxXmOmd43FGEyyL0XOLkCHbdoMIjtjXxH5WSg44HwJ9+cYoZk9yBcgA11xXz5/udjYjDt5Dwl9/qYM23BOMyRU3rssvSArCqa+XQQvZdhjUou9KpNviyd/+Fnmz9yQl4IFIciKR9QkZDdZRV0jwvyrfKp/0d2QxStyZK/hxs8wvZQ4iBooaAIaUCqNd/WyOVKfR9bRqCGwG4JfR3Ve5RXBZHsgKmU/57qw6WB/U3mJ9wgmrrs6+txxyV/iLwJhk951SEB7DKPv6f8yNLlvAlIq0=)
2025-07-12 13:18:42.301534 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLFPHgQdJwVW1URg4DQgTLKK/yDIq+Q/6s9qUSaqTAQJl7TrD6tVVTTee4xpfDHzXuZGx3jzeNdJh4/6yo4OIZc=)
2025-07-12 13:18:42.301546 | orchestrator |
2025-07-12 13:18:42.301558 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:18:42.301570 | orchestrator | Saturday 12 July 2025 13:18:39 +0000 (0:00:01.102) 0:00:20.377 *********
2025-07-12 13:18:42.301583 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP2pVFvQdlz+aY6IaDV8AGlmo0at3WjByA59SLveFG7ljAyFtCO+0H8o70GiddYpST1Jbo6srjvW2qoFqXzcOb0=)
2025-07-12 13:18:42.301596 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwDygybsHMVWVjqsJVZGy1rVAUhJBWHC0j0+zU4Vwy+C8lYBLL9Y1IOlPCjHbFpqEHMvu6g0W+I3b196GG50SpP66JI6m0Z1NT1yI64gTUHSmMFKCGc28qvFjCZhb4l7TDXbeRH8IMIQQKWT+c/Y1gb0B6XpWYyJyz1z7gYkhi/oOGMsavFOuW5XZqe4wzSynbjUZWRRFvR45tjGw//MEQIOKhL4tRdqSLCIc0YwFgNJiuvlZ5hRCmk7XY6t5JeLd63Y0+a+N9hD10R7ONRx3KfW4DW/Ws47cfpBDStmVu2P0q1qCjTI0T0hfLmR8bQvjKP10+p1LX/q+ABDEyLV5A0YokbQwJ6Wp7V0k9w8Y1fO/oF8CpWdaJiq8n1ZxPzfqer5RkeenzbWbFL+GU9HsnonqgJGxHxeTDp2VbS/Wp79WvsVeAdLZ9l23gblIIT2z56jL6uOUsKuDUlDdyJv/cRSk0/+yRKJDE7lIis1o0HDwjLL+b/TmgWb/6Oqkstvs=)
2025-07-12 13:18:42.301608 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILrqyiEmg+9rigA8bB7RFyCK+dxB2RGgDYjWVzQJ2LYN)
2025-07-12 13:18:42.301620 | orchestrator |
2025-07-12 13:18:42.301631 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:18:42.301643 | orchestrator | Saturday 12 July 2025 13:18:40 +0000 (0:00:01.114) 0:00:21.491 *********
2025-07-12 13:18:42.301661 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZyTLmxZcUl9NliWGQglKEEr5cL5QyaZZ+9kMbeGCkV76ahzMbxNvWS+E7OqRztBaksoLv3WO2GiYOcUc5u+F+cObIRb3FYSpGY7ziz4Ss77DFvUQnHc6ql9hHL8PRHbVkp2nhETAJ+gwo0g4Ji76NkngCbl/CPKiUFpEbVkepZbsLYwggVmD+Q1bdaya7avUXtHR/O1ruIRMngSH8HFKdCh9T1L4oibsscEXw4e0GCQ+3VhDZBrU8aOD6/XGzF/Zg0fPOmmzs2TwQYjD2HCyFJe+ZsKSeEQKlU7gGuAEytpHav1ge5Ak+DtBd0/DDvJedN350PCrBrRh26u+lmIT/TaguwtD174Ih2CmgpdwqN0WehHfJDURJybheKBNeEk5Or+FDDLgsaoBfwg522YdzSMQo+IjC2izsHZB6U2uMde77Ej6KTVN6+Hexa4jZ/tAWzg+hwUjTlQsZixEC8PX2nJblammrnYoUpV9O34dZueo8++hdWfhdflLFStJ6f6c=)
2025-07-12 13:18:42.301674 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBVxMOl7v6lyJpaDkeBYxl4RtcR3hzYI2H0DE55ldJrGwA0ud5UJEnWCmD3dN11sNIcxAoCNll6aT3c5BJq2Eio=)
2025-07-12 13:18:42.301694 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHJlsbc7IQJvEW40ljXE8EN4vJ9fZ8m+/i2wezItNdUD)
2025-07-12 13:18:42.301705 | orchestrator |
2025-07-12 13:18:42.301718 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:18:42.301729 | orchestrator | Saturday 12 July 2025 13:18:41 +0000 (0:00:01.069) 0:00:22.561 *********
2025-07-12 13:18:42.301741 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFbm2iK8pOhf5ZRpA+3ppsyBP/dxxC1PULSEnwFVDJrlYHuPmD9qgliQiFl+a1KFbD/h1Shy1UnCSDHRQk8Kk88=)
2025-07-12 13:18:42.301753 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpbVQ9s6ZIoV9WIunsKxh3YplMiX1IxRlBy6bQiYCIFvEvqeQwRuFQ5e338R7NJXVj06CyU3/vJ4UcQ5C31fV5o5KBr5S0HF9L7DW0ThoW6nLN9+4hbSfw4VgW38+j2DWZL6hRutT7pXBwiY0pdK2gb9uaQP1hxgLIQGeXQOSOgRA5BfeJhpib5TCx928pzkhPIIQD2oYF+tCSUaiZMNSxQpAr4KxrNA3nrszgn4Cy1okU+1b164I18BoqRK+BO5ZM556y/OxDYB0HymFP70H8cRg+zIb6UuZunBdvqlv+SK6SjPq7Y1cxaoI0NDQq6mvIdV57SYZ25xHgPwHAlAHcst89farfn+Mx098nh4pbMaEgVpvxusm09e2iXEU89gNSg+d6rDn8fggspzpKWeZSj143xNiTk3mpWu5y/LOQUKEchzGefsGtmlUJrdy+WhNYr0Z8PvHbw13l5nn5yd5BSvbzHUk9d5/D2+neTxYnGOr1DdH6NtBt0mm0bdIffB0=)
2025-07-12 13:18:42.301776 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKMaPkNeubE+4+eNlgaK3XyDfH5hPcx4o5NB+ez6oX2A)
2025-07-12 13:18:46.580666 | orchestrator |
2025-07-12 13:18:46.580784 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:18:46.580801 | orchestrator | Saturday 12 July 2025 13:18:42 +0000 (0:00:01.036) 0:00:23.597 *********
2025-07-12 13:18:46.580816 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCW8SwVoBiX7s2L80yCS1q9wmkWCtxtXZfvIHgxeg1YhnAhDv5qREWqBBflo1YQqIgVMNMCl/WnedMp5SASIy0x0JR3mWA7B0e00yIt2sbs+VqrNv2nW1DjsTUcz4RkY4Gy+I4Ket4NiwtNAo4L/kWB3DW1SznP6tkQ7/8JWc//dYPNL4IsnRp3R54AOQN+E3+06GO+W/U6GoLC3RKHmt9d83XUr/Hy1um6wEKp+tSdg48JiigoV/+ALr0/z52f9Xe1JuLMSYUTsRaEt3YvzsNE7xoYPemf1BFcle8m/iCvh5kKaeVpG17P2U2Nq1quMP6KSHruDUIkv8b+jkOxMkRr7kBbE7xGcx7xXC4ybSzQZ8Z1qGFdIWKC9er2AZiYV0piFfz7gbiosRWtCoPlCxxcAmQndbmoq+VuhakedySEYi1/3iTssehwICqy1pp/OBteZko2PbVHyQDL5lOEiNl9hQ7+g/1Ssi5zhb3k9KwQf48/ZWp3FBxm6gGnYrK289E=)
2025-07-12 13:18:46.580833 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC9yR/lfN8lwlxlp4ckrptKrBWZky9amjFgWCHfMz4zpUlKUysEvRQDP9DTAD/I3yG5pADHoD4RT5Qjsdi0kztY=)
2025-07-12 13:18:46.580847 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFS/GQrmfox2RixkI/cu4tOWnocT19YDw4gCAsoS8YaL)
2025-07-12 13:18:46.580860 | orchestrator |
2025-07-12 13:18:46.580871 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:18:46.580882 | orchestrator | Saturday 12 July 2025 13:18:43 +0000 (0:00:01.091) 0:00:24.689 *********
2025-07-12 13:18:46.580894 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFaK3Mw8AtVePsH9B00xLVQ0MnXxgkIgEBHC2PLA0vFC)
2025-07-12 13:18:46.580905 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeSfTLNyUkTGOU/GPqAF8Oc86LSXpO6tg7Q2fSdVo6wzlxReYjHyMuuOm7SwLLBYJ945rRVTyW3Z8RNpoBliwv28cbM2CUBAVwrp6DU/YaQ7Kp+ImmcH68rh4adioynVGIK80FeVnL2j30nLCA7JYz73rjk+uQuGrSm5/OLYJ92oRrhpENmBQJm47r9yNB+ki/NiQO69bJ3YOxLKqaz25fqio3RYUKZUV3L6g2OW+UD2DqeYyvlA30lDE+6t89MsjTOeevSHk8Hy2bnTmS6uZXqTz2Ly/ntiiwM7ajAC0mEyuSQ5cbVpLaL83d94VS5MnU4hYsKx35hFCMH3Gq3RXJ09pdBEHCj/RzyvOMeXaUKNUlNfe2iwj1rpKuAHoXoYR/8eOKEPgg79DhKLY5Zu+yPyDYAWB7i/zE0bStb6aIHIOGCBMIY5Gu0az9s1WyoNsjbd8b38OCkZB2nkPRhGVX/JQ9kguwiPUeeOufjs/o1TjBWueJ2uppm/23w5imANU=)
2025-07-12 13:18:46.580917 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLI/DGWn7Nx6xIKexylCi26YKUgxfIZEchJC1NELghWVjeUirTIIr62PncpK2nQRIY/YsMse/kYxjJwSfv3RJw8=)
2025-07-12 13:18:46.580953 | orchestrator |
2025-07-12 13:18:46.580964 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 13:18:46.580975 | orchestrator | Saturday 12 July 2025 13:18:44 +0000 (0:00:01.080) 0:00:25.770 *********
2025-07-12 13:18:46.580986 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8M6VGRGPmqE+iUcUyCDfpZb7NSQwXPHlPhdP6wLvZVU9qJ3rh999+A4XmS769EQNQhzFQqkAynA5J86accefNJAznEZzq93EsdxVzF1WmhLf+4v+Qw+u3+dWUDqkdk/Gvhd9ea63QVsunsfT8TKjA7t2wphEPbXOu0DaulPPK5zIuDtCAJXV0WCxulUKpQ1rc0DYv7i0UA+ZwtforEpZNw2YB2GNe+g5kEIsp7/r8EL2IKapvOZI6pfTZos0i83KDINgtiFvOqI7kA/PVowyK/Qn1o6+sNQo6vOn1uw6CBi2HNic5BdYJLMI0E9oq20Xlew1l6E0jHp9S+pGwDxs7QF2m27cun4tCrQQXoUOQKUJJaRSC0wpG0NoRJz4RtlhvBhSRP/DHrSLzfLvwhn+TmVAeFMLiVZWbpO9yUmOihRs6cnhOD1ew9CbL28UkEk+TQvXBd87k6aXETpImrl7iX+ZcBOEYxcS7bAdi5TIABnA73BS+6dSCAAISLfff130=)
2025-07-12 13:18:46.580998 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEiofz0GVA0d4HRkfkSQvfIjsbWTT8u1YXDrWg/HdAEwUe1lEYi4cG3KaNgjlttHo2Buy6xcIhDuej9Tkk/StDs=)
2025-07-12 13:18:46.581009 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIY6MJDLtNMiUJcisOf0mIT3x8FfQ5EPreFFPFf3sZIu)
2025-07-12 13:18:46.581020 | orchestrator |
2025-07-12 13:18:46.581031 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-07-12 13:18:46.581042 | orchestrator | Saturday 12 July 2025 13:18:45 +0000 (0:00:01.074) 0:00:26.844 *********
2025-07-12 13:18:46.581053 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-07-12 13:18:46.581065 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-12 13:18:46.581076 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-07-12 13:18:46.581086 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-07-12 13:18:46.581097 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-07-12 13:18:46.581107 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-07-12 13:18:46.581118 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-07-12 13:18:46.581242 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:18:46.581257 | orchestrator |
2025-07-12 13:18:46.581289 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-07-12 13:18:46.581302 | orchestrator | Saturday 12 July 2025 13:18:45 +0000 (0:00:00.183) 0:00:27.028 *********
2025-07-12 13:18:46.581314 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:18:46.581326 | orchestrator |
2025-07-12 13:18:46.581338 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-07-12 13:18:46.581350 | orchestrator | Saturday 12 July 2025 13:18:45 +0000 (0:00:00.059) 0:00:27.087 *********
2025-07-12 13:18:46.581362 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:18:46.581374 | orchestrator |
2025-07-12 13:18:46.581391 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-07-12 13:18:46.581403 | orchestrator | Saturday 12 July 2025 13:18:45 +0000 (0:00:00.052) 0:00:27.139 *********
2025-07-12 13:18:46.581415 | orchestrator | changed: [testbed-manager]
2025-07-12 13:18:46.581426 | orchestrator |
2025-07-12 13:18:46.581438 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:18:46.581450 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 13:18:46.581463 | orchestrator |
2025-07-12 13:18:46.581474 | orchestrator |
2025-07-12 13:18:46.581486 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:18:46.581497 | orchestrator | Saturday 12 July 2025 13:18:46 +0000 (0:00:00.488) 0:00:27.628 *********
2025-07-12 13:18:46.581509 | orchestrator | ===============================================================================
2025-07-12 13:18:46.581531 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.08s
2025-07-12 13:18:46.581543 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.23s
2025-07-12 13:18:46.581556 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s
2025-07-12 13:18:46.581566 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-07-12 13:18:46.581577 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-07-12 13:18:46.581588 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-07-12 13:18:46.581598 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-07-12 13:18:46.581609 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-07-12 13:18:46.581619 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-07-12 13:18:46.581630 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-07-12 13:18:46.581641 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-07-12 13:18:46.581652 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-07-12 13:18:46.581662 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-07-12 13:18:46.581673 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-07-12 13:18:46.581683 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-07-12 13:18:46.581694 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s
2025-07-12 13:18:46.581705 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s
2025-07-12 13:18:46.581715 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s
2025-07-12 13:18:46.581726 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2025-07-12 13:18:46.581737 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2025-07-12 13:18:46.854603 | orchestrator | + osism apply squid
2025-07-12 13:18:58.783081 | orchestrator | 2025-07-12 13:18:58 | INFO  | Task 0a4cc6ec-d9e3-4584-905e-caebe5b5f27f (squid) was prepared for execution.
2025-07-12 13:18:58.783224 | orchestrator | 2025-07-12 13:18:58 | INFO  | It takes a moment until task 0a4cc6ec-d9e3-4584-905e-caebe5b5f27f (squid) has been started and output is visible here.
2025-07-12 13:20:52.839596 | orchestrator |
2025-07-12 13:20:52.839716 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-07-12 13:20:52.839733 | orchestrator |
2025-07-12 13:20:52.839745 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-07-12 13:20:52.839757 | orchestrator | Saturday 12 July 2025 13:19:02 +0000 (0:00:00.165) 0:00:00.165 *********
2025-07-12 13:20:52.839773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 13:20:52.839792 | orchestrator |
2025-07-12 13:20:52.839811 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-07-12 13:20:52.839829 | orchestrator | Saturday 12 July 2025 13:19:02 +0000 (0:00:00.086) 0:00:00.251 *********
2025-07-12 13:20:52.839847 | orchestrator | ok: [testbed-manager]
2025-07-12 13:20:52.839866 | orchestrator |
2025-07-12 13:20:52.839884 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-07-12 13:20:52.839903 | orchestrator | Saturday 12 July 2025 13:19:04 +0000 (0:00:01.411) 0:00:01.663 *********
2025-07-12 13:20:52.839922 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-07-12 13:20:52.839942 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-07-12 13:20:52.839955 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-07-12 13:20:52.839992 | orchestrator |
2025-07-12 13:20:52.840004 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-07-12 13:20:52.840015 | orchestrator | Saturday 12 July 2025 13:19:05 +0000 (0:00:01.098) 0:00:02.762 *********
2025-07-12 13:20:52.840026 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-07-12 13:20:52.840037 | orchestrator |
2025-07-12 13:20:52.840048 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-07-12 13:20:52.840059 | orchestrator | Saturday 12 July 2025 13:19:06 +0000 (0:00:01.059) 0:00:03.821 *********
2025-07-12 13:20:52.840070 | orchestrator | ok: [testbed-manager]
2025-07-12 13:20:52.840080 | orchestrator |
2025-07-12 13:20:52.840091 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-07-12 13:20:52.840102 | orchestrator | Saturday 12 July 2025 13:19:06 +0000 (0:00:00.359) 0:00:04.181 *********
2025-07-12 13:20:52.840113 | orchestrator | changed: [testbed-manager]
2025-07-12 13:20:52.840124 | orchestrator |
2025-07-12 13:20:52.840135 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-07-12 13:20:52.840145 | orchestrator | Saturday 12 July 2025 13:19:07 +0000 (0:00:00.907) 0:00:05.088 *********
2025-07-12 13:20:52.840156 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-07-12 13:20:52.840168 | orchestrator | ok: [testbed-manager]
2025-07-12 13:20:52.840206 | orchestrator |
2025-07-12 13:20:52.840219 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-07-12 13:20:52.840229 | orchestrator | Saturday 12 July 2025 13:19:39 +0000 (0:00:31.628) 0:00:36.716 *********
2025-07-12 13:20:52.840240 | orchestrator | changed: [testbed-manager]
2025-07-12 13:20:52.840250 | orchestrator |
2025-07-12 13:20:52.840261 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-07-12 13:20:52.840272 | orchestrator | Saturday 12 July 2025 13:19:51 +0000 (0:00:12.542) 0:00:49.258 *********
2025-07-12 13:20:52.840283 | orchestrator | Pausing for 60 seconds
2025-07-12 13:20:52.840294 | orchestrator | changed: [testbed-manager]
2025-07-12 13:20:52.840305 | orchestrator |
2025-07-12 13:20:52.840316 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-07-12 13:20:52.840326 | orchestrator | Saturday 12 July 2025 13:20:51 +0000 (0:01:00.083) 0:01:49.341 *********
2025-07-12 13:20:52.840337 | orchestrator | ok: [testbed-manager]
2025-07-12 13:20:52.840347 | orchestrator |
2025-07-12 13:20:52.840359 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-07-12 13:20:52.840369 | orchestrator | Saturday 12 July 2025 13:20:51 +0000 (0:00:00.074) 0:01:49.416 *********
2025-07-12 13:20:52.840380 | orchestrator | changed: [testbed-manager]
2025-07-12 13:20:52.840391 | orchestrator |
2025-07-12 13:20:52.840401 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:20:52.840412 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:20:52.840423 | orchestrator |
2025-07-12 13:20:52.840434 | orchestrator |
2025-07-12 13:20:52.840445 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:20:52.840455 | orchestrator | Saturday 12 July 2025 13:20:52 +0000 (0:00:00.668) 0:01:50.084 *********
2025-07-12 13:20:52.840466 | orchestrator | ===============================================================================
2025-07-12 13:20:52.840494 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-07-12 13:20:52.840506 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.63s
2025-07-12 13:20:52.840516 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.54s
2025-07-12 13:20:52.840528 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.41s
2025-07-12 13:20:52.840547 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.10s
2025-07-12 13:20:52.840565 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s
2025-07-12 13:20:52.840594 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.91s
2025-07-12 13:20:52.840612 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s
2025-07-12 13:20:52.840630 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s
2025-07-12 13:20:52.840648 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2025-07-12 13:20:52.840667 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-07-12 13:20:53.109861 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-12 13:20:53.110382 | orchestrator | ++ semver latest 9.0.0
2025-07-12 13:20:53.161014 | orchestrator | + [[ -1 -lt 0 ]]
2025-07-12 13:20:53.161099 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-12 13:20:53.161508 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-07-12 13:21:05.010635 | orchestrator | 2025-07-12 13:21:05 | INFO  | Task 0bd0f876-b866-46c8-ba97-95938833a4b5 (operator) was prepared for execution.
2025-07-12 13:21:05.010748 | orchestrator | 2025-07-12 13:21:05 | INFO  | It takes a moment until task 0bd0f876-b866-46c8-ba97-95938833a4b5 (operator) has been started and output is visible here.
2025-07-12 13:21:20.841992 | orchestrator |
2025-07-12 13:21:20.842157 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-07-12 13:21:20.842174 | orchestrator |
2025-07-12 13:21:20.842186 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 13:21:20.842261 | orchestrator | Saturday 12 July 2025 13:21:08 +0000 (0:00:00.160) 0:00:00.160 *********
2025-07-12 13:21:20.842273 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:21:20.842285 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:21:20.842296 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:20.842307 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:21:20.842318 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:20.842329 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:20.842340 | orchestrator |
2025-07-12 13:21:20.842351 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-07-12 13:21:20.842362 | orchestrator | Saturday 12 July 2025 13:21:12 +0000 (0:00:03.292) 0:00:03.453 *********
2025-07-12 13:21:20.842372 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:20.842383 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:21:20.842394 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:20.842405 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:21:20.842415 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:21:20.842426 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:20.842437 | orchestrator |
2025-07-12 13:21:20.842447 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-07-12 13:21:20.842458 | orchestrator |
2025-07-12 13:21:20.842470 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-12 13:21:20.842496 | orchestrator | Saturday 12 July 2025 13:21:12 +0000 (0:00:00.740) 0:00:04.194 *********
2025-07-12 13:21:20.842508 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:21:20.842519 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:21:20.842529 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:21:20.842540 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:20.842550 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:20.842561 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:20.842572 | orchestrator |
2025-07-12 13:21:20.842582 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-12 13:21:20.842593 | orchestrator | Saturday 12 July 2025 13:21:13 +0000 (0:00:00.168) 0:00:04.362 *********
2025-07-12 13:21:20.842604 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:21:20.842615 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:21:20.842625 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:21:20.842636 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:21:20.842646 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:21:20.842657 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:21:20.842668 | orchestrator |
2025-07-12 13:21:20.842678 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-12 13:21:20.842713 | orchestrator | Saturday 12 July 2025 13:21:13 +0000 (0:00:00.155) 0:00:04.517 *********
2025-07-12 13:21:20.842725 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:21:20.842736 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:20.842747 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:20.842757 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:21:20.842768 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:20.842779 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:21:20.842789 | orchestrator |
2025-07-12 13:21:20.842800 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-12 13:21:20.842811 | orchestrator | Saturday 12 July 2025 13:21:13 +0000 (0:00:00.595) 0:00:05.113 *********
2025-07-12 13:21:20.842822 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:20.842832 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:21:20.842843 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:20.842854 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:21:20.842865 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:21:20.842876 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:20.842886 | orchestrator |
2025-07-12 13:21:20.842897 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-12 13:21:20.842908 | orchestrator | Saturday 12 July 2025 13:21:14 +0000 (0:00:00.853) 0:00:05.966 *********
2025-07-12 13:21:20.842919 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-07-12 13:21:20.842930 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-07-12 13:21:20.842940 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-07-12 13:21:20.842951 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-07-12 13:21:20.842962 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-07-12 13:21:20.842972 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-07-12 13:21:20.842983 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-07-12 13:21:20.842994 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-07-12 13:21:20.843004 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-07-12 13:21:20.843015 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-07-12 13:21:20.843025 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-07-12 13:21:20.843036 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-07-12 13:21:20.843047 | orchestrator |
2025-07-12 13:21:20.843058 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-12 13:21:20.843068 | orchestrator | Saturday 12 July 2025 13:21:15 +0000 (0:00:01.209) 0:00:07.176 *********
2025-07-12 13:21:20.843079 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:21:20.843090 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:20.843100 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:21:20.843111 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:20.843121 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:21:20.843132 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:20.843143 | orchestrator |
2025-07-12 13:21:20.843153 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-12 13:21:20.843165 | orchestrator | Saturday 12 July 2025 13:21:17 +0000 (0:00:01.269) 0:00:08.445 *********
2025-07-12 13:21:20.843176 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-07-12 13:21:20.843187 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-07-12 13:21:20.843222 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-07-12 13:21:20.843233 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:21:20.843262 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:21:20.843273 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:21:20.843284 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:21:20.843295 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:21:20.843313 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 13:21:20.843323 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-07-12 13:21:20.843334 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-07-12 13:21:20.843345 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-07-12 13:21:20.843355 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-07-12 13:21:20.843366 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-07-12 13:21:20.843377 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-07-12 13:21:20.843387 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:21:20.843398 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:21:20.843408 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:21:20.843419 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:21:20.843430 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:21:20.843441 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-07-12 13:21:20.843451 | orchestrator |
2025-07-12 13:21:20.843462 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-12 13:21:20.843475 | orchestrator | Saturday 12 July 2025 13:21:18 +0000 (0:00:01.310) 0:00:09.756 *********
2025-07-12 13:21:20.843485 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:21:20.843496 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:21:20.843507 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:21:20.843518 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:21:20.843528 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:21:20.843539 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:21:20.843549 | orchestrator |
2025-07-12 13:21:20.843560 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-12 13:21:20.843571 | orchestrator | Saturday 12 July 2025 13:21:18 +0000 (0:00:00.178) 0:00:09.934 *********
2025-07-12 13:21:20.843589 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:21:20.843600 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:21:20.843611 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:20.843622 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:20.843632 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:20.843643 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:21:20.843654 | orchestrator |
2025-07-12 13:21:20.843665 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-12 13:21:20.843676 | orchestrator | Saturday 12 July 2025 13:21:19 +0000 (0:00:00.617) 0:00:10.552 *********
2025-07-12 13:21:20.843687 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:21:20.843697 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:21:20.843711 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:21:20.843730 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:21:20.843749 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:21:20.843767 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:21:20.843787 | orchestrator |
2025-07-12 13:21:20.843806 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-12 13:21:20.843820 | orchestrator | Saturday 12 July 2025 13:21:19 +0000 (0:00:00.227) 0:00:10.780 *********
2025-07-12 13:21:20.843831 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 13:21:20.843841 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 13:21:20.843852 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-12 13:21:20.843863 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:20.843874 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:20.843884 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:21:20.843895 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 13:21:20.843905 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:21:20.843928 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-12 13:21:20.843940 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:21:20.843951 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 13:21:20.843962 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:21:20.843972 | orchestrator |
2025-07-12 13:21:20.843983 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-12 13:21:20.843994 | orchestrator | Saturday 12 July 2025 13:21:20 +0000 (0:00:00.716) 0:00:11.497 *********
2025-07-12 13:21:20.844005 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:21:20.844015 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:21:20.844026 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:21:20.844037 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:21:20.844047 | orchestrator | skipping: [testbed-node-4]
2025-07-12
13:21:20.844058 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:21:20.844069 | orchestrator | 2025-07-12 13:21:20.844079 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-12 13:21:20.844090 | orchestrator | Saturday 12 July 2025 13:21:20 +0000 (0:00:00.188) 0:00:11.685 ********* 2025-07-12 13:21:20.844101 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:21:20.844112 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:21:20.844122 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:21:20.844133 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:21:20.844144 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:21:20.844154 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:21:20.844165 | orchestrator | 2025-07-12 13:21:20.844176 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-12 13:21:20.844186 | orchestrator | Saturday 12 July 2025 13:21:20 +0000 (0:00:00.194) 0:00:11.880 ********* 2025-07-12 13:21:20.844218 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:21:20.844229 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:21:20.844240 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:21:20.844251 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:21:20.844270 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:21:22.085690 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:21:22.085796 | orchestrator | 2025-07-12 13:21:22.085813 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-12 13:21:22.085827 | orchestrator | Saturday 12 July 2025 13:21:20 +0000 (0:00:00.175) 0:00:12.056 ********* 2025-07-12 13:21:22.085838 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:21:22.085849 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:21:22.085860 | orchestrator | changed: [testbed-node-4] 2025-07-12 
13:21:22.085870 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:21:22.085881 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:21:22.085891 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:21:22.085902 | orchestrator |
2025-07-12 13:21:22.085913 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-12 13:21:22.085923 | orchestrator | Saturday 12 July 2025 13:21:21 +0000 (0:00:00.668) 0:00:12.725 *********
2025-07-12 13:21:22.085934 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:21:22.085944 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:21:22.085955 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:21:22.085965 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:21:22.085976 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:21:22.085987 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:21:22.085998 | orchestrator |
2025-07-12 13:21:22.086008 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:21:22.086100 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:21:22.086115 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:21:22.086146 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:21:22.086158 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:21:22.086169 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:21:22.086180 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 13:21:22.086245 | orchestrator |
2025-07-12 13:21:22.086259 | orchestrator |
2025-07-12 13:21:22.086272 |
orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:21:22.086285 | orchestrator | Saturday 12 July 2025 13:21:21 +0000 (0:00:00.234) 0:00:12.959 *********
2025-07-12 13:21:22.086297 | orchestrator | ===============================================================================
2025-07-12 13:21:22.086309 | orchestrator | Gathering Facts --------------------------------------------------------- 3.29s
2025-07-12 13:21:22.086322 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.31s
2025-07-12 13:21:22.086334 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s
2025-07-12 13:21:22.086346 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2025-07-12 13:21:22.086358 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.85s
2025-07-12 13:21:22.086370 | orchestrator | Do not require tty for all users ---------------------------------------- 0.74s
2025-07-12 13:21:22.086383 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s
2025-07-12 13:21:22.086394 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2025-07-12 13:21:22.086406 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s
2025-07-12 13:21:22.086418 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-07-12 13:21:22.086430 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2025-07-12 13:21:22.086442 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.23s
2025-07-12 13:21:22.086455 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s
2025-07-12 13:21:22.086467 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s
2025-07-12 13:21:22.086479 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2025-07-12 13:21:22.086490 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2025-07-12 13:21:22.086500 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2025-07-12 13:21:22.086512 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-07-12 13:21:22.391823 | orchestrator | + osism apply --environment custom facts
2025-07-12 13:21:24.171668 | orchestrator | 2025-07-12 13:21:24 | INFO  | Trying to run play facts in environment custom
2025-07-12 13:21:34.321820 | orchestrator | 2025-07-12 13:21:34 | INFO  | Task 95651b24-8bee-4ef8-97b4-21e3e7218479 (facts) was prepared for execution.
2025-07-12 13:21:34.321960 | orchestrator | 2025-07-12 13:21:34 | INFO  | It takes a moment until task 95651b24-8bee-4ef8-97b4-21e3e7218479 (facts) has been started and output is visible here.
2025-07-12 13:22:14.044685 | orchestrator | 2025-07-12 13:22:14.044809 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-07-12 13:22:14.044828 | orchestrator | 2025-07-12 13:22:14.044841 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-12 13:22:14.044854 | orchestrator | Saturday 12 July 2025 13:21:38 +0000 (0:00:00.085) 0:00:00.085 ********* 2025-07-12 13:22:14.044893 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:14.044908 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:14.044921 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:22:14.044932 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:22:14.044944 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:14.044956 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:22:14.044968 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:14.044980 | orchestrator | 2025-07-12 13:22:14.044992 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-07-12 13:22:14.045004 | orchestrator | Saturday 12 July 2025 13:21:39 +0000 (0:00:01.443) 0:00:01.529 ********* 2025-07-12 13:22:14.045016 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:14.045028 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:14.045041 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:22:14.045053 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:14.045064 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:14.045076 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:22:14.045088 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:22:14.045100 | orchestrator | 2025-07-12 13:22:14.045112 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-07-12 13:22:14.045124 | orchestrator | 2025-07-12 13:22:14.045136 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2025-07-12 13:22:14.045148 | orchestrator | Saturday 12 July 2025 13:21:40 +0000 (0:00:01.201) 0:00:02.731 ********* 2025-07-12 13:22:14.045160 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:14.045173 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:14.045184 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:14.045196 | orchestrator | 2025-07-12 13:22:14.045211 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-12 13:22:14.045250 | orchestrator | Saturday 12 July 2025 13:21:40 +0000 (0:00:00.107) 0:00:02.838 ********* 2025-07-12 13:22:14.045263 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:14.045275 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:14.045304 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:14.045317 | orchestrator | 2025-07-12 13:22:14.045329 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-12 13:22:14.045342 | orchestrator | Saturday 12 July 2025 13:21:41 +0000 (0:00:00.240) 0:00:03.079 ********* 2025-07-12 13:22:14.045354 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:14.045367 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:14.045378 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:14.045389 | orchestrator | 2025-07-12 13:22:14.045400 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-12 13:22:14.045411 | orchestrator | Saturday 12 July 2025 13:21:41 +0000 (0:00:00.225) 0:00:03.305 ********* 2025-07-12 13:22:14.045423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:22:14.045436 | orchestrator | 2025-07-12 13:22:14.045447 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2025-07-12 13:22:14.045458 | orchestrator | Saturday 12 July 2025 13:21:41 +0000 (0:00:00.156) 0:00:03.461 ********* 2025-07-12 13:22:14.045469 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:14.045481 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:14.045492 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:14.045502 | orchestrator | 2025-07-12 13:22:14.045513 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-12 13:22:14.045524 | orchestrator | Saturday 12 July 2025 13:21:41 +0000 (0:00:00.418) 0:00:03.880 ********* 2025-07-12 13:22:14.045535 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:22:14.045545 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:22:14.045556 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:22:14.045566 | orchestrator | 2025-07-12 13:22:14.045577 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-12 13:22:14.045597 | orchestrator | Saturday 12 July 2025 13:21:42 +0000 (0:00:00.134) 0:00:04.015 ********* 2025-07-12 13:22:14.045608 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:14.045619 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:14.045629 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:14.045640 | orchestrator | 2025-07-12 13:22:14.045651 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-12 13:22:14.045661 | orchestrator | Saturday 12 July 2025 13:21:43 +0000 (0:00:01.037) 0:00:05.052 ********* 2025-07-12 13:22:14.045672 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:14.045683 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:14.045694 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:14.045704 | orchestrator | 2025-07-12 13:22:14.045715 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-12 
13:22:14.045726 | orchestrator | Saturday 12 July 2025 13:21:43 +0000 (0:00:00.449) 0:00:05.502 ********* 2025-07-12 13:22:14.045736 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:14.045747 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:14.045758 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:14.045769 | orchestrator | 2025-07-12 13:22:14.045779 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-12 13:22:14.045790 | orchestrator | Saturday 12 July 2025 13:21:44 +0000 (0:00:00.994) 0:00:06.496 ********* 2025-07-12 13:22:14.045801 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:14.045812 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:14.045823 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:14.045833 | orchestrator | 2025-07-12 13:22:14.045844 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-07-12 13:22:14.045855 | orchestrator | Saturday 12 July 2025 13:21:58 +0000 (0:00:13.479) 0:00:19.975 ********* 2025-07-12 13:22:14.045866 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:22:14.045876 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:22:14.045887 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:22:14.045898 | orchestrator | 2025-07-12 13:22:14.045909 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-07-12 13:22:14.045937 | orchestrator | Saturday 12 July 2025 13:21:58 +0000 (0:00:00.103) 0:00:20.079 ********* 2025-07-12 13:22:14.045949 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:22:14.045960 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:22:14.045970 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:22:14.045981 | orchestrator | 2025-07-12 13:22:14.045992 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-12 
13:22:14.046003 | orchestrator | Saturday 12 July 2025 13:22:05 +0000 (0:00:07.038) 0:00:27.118 ********* 2025-07-12 13:22:14.046067 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:14.046081 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:14.046092 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:14.046103 | orchestrator | 2025-07-12 13:22:14.046114 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-07-12 13:22:14.046125 | orchestrator | Saturday 12 July 2025 13:22:05 +0000 (0:00:00.423) 0:00:27.542 ********* 2025-07-12 13:22:14.046135 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-07-12 13:22:14.046146 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-07-12 13:22:14.046157 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-07-12 13:22:14.046168 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-07-12 13:22:14.046178 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-07-12 13:22:14.046189 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-07-12 13:22:14.046206 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-07-12 13:22:14.046234 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-07-12 13:22:14.046245 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-07-12 13:22:14.046264 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-07-12 13:22:14.046275 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-07-12 13:22:14.046286 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-07-12 13:22:14.046297 | orchestrator | 2025-07-12 13:22:14.046308 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-07-12 13:22:14.046319 | orchestrator | Saturday 12 July 2025 13:22:09 +0000 (0:00:03.389) 0:00:30.931 ********* 2025-07-12 13:22:14.046330 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:14.046341 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:14.046352 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:14.046362 | orchestrator | 2025-07-12 13:22:14.046373 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-12 13:22:14.046384 | orchestrator | 2025-07-12 13:22:14.046395 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-12 13:22:14.046407 | orchestrator | Saturday 12 July 2025 13:22:10 +0000 (0:00:01.217) 0:00:32.149 ********* 2025-07-12 13:22:14.046417 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:14.046428 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:14.046439 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:14.046450 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:14.046461 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:14.046472 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:14.046482 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:14.046493 | orchestrator | 2025-07-12 13:22:14.046504 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:22:14.046516 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:22:14.046528 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:22:14.046541 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:22:14.046552 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:22:14.046563 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:22:14.046575 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:22:14.046586 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:22:14.046597 | orchestrator |
2025-07-12 13:22:14.046608 | orchestrator |
2025-07-12 13:22:14.046618 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:22:14.046629 | orchestrator | Saturday 12 July 2025 13:22:14 +0000 (0:00:03.783) 0:00:35.932 *********
2025-07-12 13:22:14.046640 | orchestrator | ===============================================================================
2025-07-12 13:22:14.046651 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.48s
2025-07-12 13:22:14.046662 | orchestrator | Install required packages (Debian) -------------------------------------- 7.04s
2025-07-12 13:22:14.046673 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.78s
2025-07-12 13:22:14.046684 | orchestrator | Copy fact files --------------------------------------------------------- 3.39s
2025-07-12 13:22:14.046695 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s
2025-07-12 13:22:14.046706 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.22s
2025-07-12 13:22:14.046725 | orchestrator | Copy fact file ---------------------------------------------------------- 1.20s
2025-07-12 13:22:14.262511 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2025-07-12 13:22:14.262618 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.99s
2025-07-12 13:22:14.262632 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2025-07-12 13:22:14.262645 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-07-12 13:22:14.262656 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2025-07-12 13:22:14.262667 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.24s
2025-07-12 13:22:14.262678 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2025-07-12 13:22:14.262689 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2025-07-12 13:22:14.262700 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2025-07-12 13:22:14.262712 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-07-12 13:22:14.262723 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-07-12 13:22:14.576059 | orchestrator | + osism apply bootstrap
2025-07-12 13:22:26.472647 | orchestrator | 2025-07-12 13:22:26 | INFO  | Task 6f30c0d5-e318-4701-86a5-ad8a6f509639 (bootstrap) was prepared for execution.
2025-07-12 13:22:26.472755 | orchestrator | 2025-07-12 13:22:26 | INFO  | It takes a moment until task 6f30c0d5-e318-4701-86a5-ad8a6f509639 (bootstrap) has been started and output is visible here.
2025-07-12 13:22:41.997676 | orchestrator | 2025-07-12 13:22:41.997783 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-07-12 13:22:41.997800 | orchestrator | 2025-07-12 13:22:41.997812 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-07-12 13:22:41.997823 | orchestrator | Saturday 12 July 2025 13:22:30 +0000 (0:00:00.169) 0:00:00.169 ********* 2025-07-12 13:22:41.997834 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:41.997846 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:41.997857 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:41.997867 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:41.997878 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:41.997888 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:41.997898 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:41.997909 | orchestrator | 2025-07-12 13:22:41.997920 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-12 13:22:41.997930 | orchestrator | 2025-07-12 13:22:41.997941 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-12 13:22:41.997952 | orchestrator | Saturday 12 July 2025 13:22:30 +0000 (0:00:00.283) 0:00:00.452 ********* 2025-07-12 13:22:41.997962 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:22:41.997973 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:22:41.997983 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:22:41.997994 | orchestrator | ok: [testbed-manager] 2025-07-12 13:22:41.998004 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:22:41.998014 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:22:41.998093 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:22:41.998104 | orchestrator | 2025-07-12 13:22:41.998133 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-07-12 13:22:41.998145 | orchestrator | 2025-07-12 13:22:41.998155 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-12 13:22:41.998166 | orchestrator | Saturday 12 July 2025 13:22:34 +0000 (0:00:03.589) 0:00:04.041 ********* 2025-07-12 13:22:41.998177 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-12 13:22:41.998189 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-12 13:22:41.998199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-07-12 13:22:41.998210 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 13:22:41.998275 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-12 13:22:41.998291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 13:22:41.998303 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-12 13:22:41.998315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 13:22:41.998326 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-12 13:22:41.998339 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-07-12 13:22:41.998351 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-12 13:22:41.998363 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-12 13:22:41.998375 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-12 13:22:41.998388 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-12 13:22:41.998399 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-07-12 13:22:41.998412 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-12 13:22:41.998424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-07-12 13:22:41.998436 | orchestrator | 
skipping: [testbed-node-2] => (item=testbed-node-0)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-4] => (item=testbed-manager)
skipping: [testbed-node-2] => (item=testbed-node-1)
skipping: [testbed-node-1] => (item=testbed-node-2)
skipping: [testbed-node-4] => (item=testbed-node-0)
skipping: [testbed-node-5] => (item=testbed-manager)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-4] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-node-2)
skipping: [testbed-node-5] => (item=testbed-node-0)
skipping: [testbed-node-1] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-2)
skipping: [testbed-node-5] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-1] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-2)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-5)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-1]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]
skipping: [testbed-node-5] => (item=testbed-node-5)
skipping: [testbed-node-5]

PLAY [Apply bootstrap roles part 1] ********************************************

TASK [osism.commons.hostname : Set hostname] ***********************************
Saturday 12 July 2025 13:22:34 +0000 (0:00:00.410) 0:00:04.452 *********
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
Saturday 12 July 2025 13:22:36 +0000 (0:00:01.199) 0:00:05.652 *********
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]

TASK [osism.commons.hosts : Include type specific tasks] ***********************
Saturday 12 July 2025 13:22:37 +0000 (0:00:01.188) 0:00:06.840 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
Saturday 12 July 2025 13:22:37 +0000 (0:00:00.263) 0:00:07.104 *********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [osism.commons.proxy : Include distribution specific tasks] ***************
Saturday 12 July 2025 13:22:39 +0000 (0:00:01.967) 0:00:09.071 *********
skipping: [testbed-manager]
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
Saturday 12 July 2025 13:22:39 +0000 (0:00:00.279) 0:00:09.351 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-3]

TASK [osism.commons.proxy : Set system wide settings in environment file] ******
Saturday 12 July 2025 13:22:40 +0000 (0:00:01.076) 0:00:10.427 *********
skipping: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-5]

TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
Saturday 12 July 2025 13:22:41 +0000 (0:00:00.573) 0:00:11.001 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-manager]

TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
Saturday 12 July 2025 13:22:41 +0000 (0:00:00.447) 0:00:11.448 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
Saturday 12 July 2025 13:22:42 +0000 (0:00:00.233) 0:00:11.682 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
Saturday 12 July 2025 13:22:42 +0000 (0:00:00.285) 0:00:11.967 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
Saturday 12 July 2025 13:22:42 +0000 (0:00:00.295) 0:00:12.262 *********
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]

TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
Saturday 12 July 2025 13:22:43 +0000 (0:00:01.238) 0:00:13.500 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
Saturday 12 July 2025 13:22:44 +0000 (0:00:00.256) 0:00:13.757 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
Saturday 12 July 2025 13:22:44 +0000 (0:00:00.563) 0:00:14.321 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
Saturday 12 July 2025 13:22:44 +0000 (0:00:00.242) 0:00:14.563 *********
ok: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.resolvconf : Copy configuration files] *********************
Saturday 12 July 2025 13:22:45 +0000 (0:00:00.551) 0:00:15.115 *********
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-2]

TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
Saturday 12 July 2025 13:22:46 +0000 (0:00:01.134) 0:00:16.249 *********
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-3]

TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
Saturday 12 July 2025 13:22:47 +0000 (0:00:01.158) 0:00:17.408 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
Saturday 12 July 2025 13:22:48 +0000 (0:00:00.449) 0:00:17.857 *********
skipping: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-3]

TASK [osism.commons.repository : Gather variables for each operating system] ***
Saturday 12 July 2025 13:22:49 +0000 (0:00:01.300) 0:00:19.158 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.repository : Set repository_default fact to default value] ***
Saturday 12 July 2025 13:22:49 +0000 (0:00:00.226) 0:00:19.384 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.repository : Set repositories to default] ******************
Saturday 12 July 2025 13:22:50 +0000 (0:00:00.236) 0:00:19.621 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.repository : Include distribution specific repository tasks] ***
Saturday 12 July 2025 13:22:50 +0000 (0:00:00.276) 0:00:19.898 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
Saturday 12 July 2025 13:22:50 +0000 (0:00:00.296) 0:00:20.195 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
Saturday 12 July 2025 13:22:51 +0000 (0:00:00.619) 0:00:20.814 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
Saturday 12 July 2025 13:22:51 +0000 (0:00:00.211) 0:00:21.025 *********
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]
changed: [testbed-node-2]

TASK [osism.commons.repository : Remove sources.list file] *********************
Saturday 12 July 2025 13:22:52 +0000 (0:00:01.019) 0:00:22.045 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
Saturday 12 July 2025 13:22:52 +0000 (0:00:00.535) 0:00:22.580 *********
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
ok: [testbed-node-3]
changed: [testbed-node-2]

TASK [osism.commons.repository : Update package cache] *************************
Saturday 12 July 2025 13:22:53 +0000 (0:00:01.012) 0:00:23.592 *********
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [osism.services.rsyslog : Gather variables for each operating system] *****
Saturday 12 July 2025 13:23:07 +0000 (0:00:13.963) 0:00:37.555 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
Saturday 12 July 2025 13:23:08 +0000 (0:00:00.220) 0:00:37.776 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
Saturday 12 July 2025 13:23:08 +0000 (0:00:00.213) 0:00:37.990 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
Saturday 12 July 2025 13:23:08 +0000 (0:00:00.227) 0:00:38.217 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.rsyslog : Install rsyslog package] ************************
Saturday 12 July 2025 13:23:08 +0000 (0:00:00.274) 0:00:38.491 *********
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-3]

TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
Saturday 12 July 2025 13:23:10 +0000 (0:00:01.824) 0:00:40.315 *********
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-3]

TASK [osism.services.rsyslog : Manage rsyslog service] *************************
Saturday 12 July 2025 13:23:11 +0000 (0:00:01.052) 0:00:41.368 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.rsyslog : Include fluentd tasks] **************************
Saturday 12 July 2025 13:23:12 +0000 (0:00:00.810) 0:00:42.178 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
Saturday 12 July 2025 13:23:12 +0000 (0:00:00.312) 0:00:42.490 *********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [osism.services.rsyslog : Include additional log server tasks] ************
Saturday 12 July 2025 13:23:13 +0000 (0:00:01.025) 0:00:43.516 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Saturday 12 July 2025 13:23:14 +0000 (0:00:00.278) 0:00:43.795 *********
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-manager]

TASK [osism.commons.systohc : Sync hardware clock] *****************************
Saturday 12 July 2025 13:23:26 +0000 (0:00:12.739) 0:00:56.534 *********
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-1]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Saturday 12 July 2025 13:23:27 +0000 (0:00:00.714) 0:00:57.249 *********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-5]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Saturday 12 July 2025 13:23:28 +0000 (0:00:00.918) 0:00:58.168 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Saturday 12 July 2025 13:23:28 +0000 (0:00:00.227) 0:00:58.395 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Saturday 12 July 2025 13:23:29 +0000 (0:00:00.231) 0:00:58.627 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.packages : Install needrestart package] ********************
Saturday 12 July 2025 13:23:29 +0000 (0:00:00.282) 0:00:58.909 *********
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Saturday 12 July 2025 13:23:31 +0000 (0:00:01.814) 0:01:00.724 *********
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Saturday 12 July 2025 13:23:31 +0000 (0:00:00.728) 0:01:01.453 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Update package cache] ***************************
Saturday 12 July 2025 13:23:32 +0000 (0:00:00.242) 0:01:01.696 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok:
[testbed-node-3] 2025-07-12 13:25:52.522257 | orchestrator | 2025-07-12 13:25:52.522269 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-07-12 13:25:52.522280 | orchestrator | Saturday 12 July 2025 13:23:33 +0000 (0:00:01.207) 0:01:02.904 ********* 2025-07-12 13:25:52.522291 | orchestrator | changed: [testbed-manager] 2025-07-12 13:25:52.522359 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:25:52.522371 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:25:52.522382 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:25:52.522392 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:25:52.522403 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:25:52.522413 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:25:52.522424 | orchestrator | 2025-07-12 13:25:52.522435 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-07-12 13:25:52.522446 | orchestrator | Saturday 12 July 2025 13:23:34 +0000 (0:00:01.690) 0:01:04.594 ********* 2025-07-12 13:25:52.522457 | orchestrator | ok: [testbed-manager] 2025-07-12 13:25:52.522467 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:25:52.522478 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:25:52.522489 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:25:52.522499 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:25:52.522511 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:25:52.522522 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:25:52.522532 | orchestrator | 2025-07-12 13:25:52.522544 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-07-12 13:25:52.522556 | orchestrator | Saturday 12 July 2025 13:23:37 +0000 (0:00:02.474) 0:01:07.068 ********* 2025-07-12 13:25:52.522568 | orchestrator | ok: [testbed-manager] 2025-07-12 13:25:52.522579 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:25:52.522591 | orchestrator | 
ok: [testbed-node-5] 2025-07-12 13:25:52.522603 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:25:52.522614 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:25:52.522626 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:25:52.522637 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:25:52.522649 | orchestrator | 2025-07-12 13:25:52.522661 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-07-12 13:25:52.522674 | orchestrator | Saturday 12 July 2025 13:24:15 +0000 (0:00:37.657) 0:01:44.726 ********* 2025-07-12 13:25:52.522685 | orchestrator | changed: [testbed-manager] 2025-07-12 13:25:52.522697 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:25:52.522709 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:25:52.522720 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:25:52.522732 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:25:52.522744 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:25:52.522756 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:25:52.522768 | orchestrator | 2025-07-12 13:25:52.522780 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-07-12 13:25:52.522817 | orchestrator | Saturday 12 July 2025 13:25:31 +0000 (0:01:16.784) 0:03:01.510 ********* 2025-07-12 13:25:52.522830 | orchestrator | ok: [testbed-manager] 2025-07-12 13:25:52.522842 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:25:52.522854 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:25:52.522865 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:25:52.522877 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:25:52.522888 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:25:52.522900 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:25:52.522911 | orchestrator | 2025-07-12 13:25:52.522922 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-07-12 13:25:52.522933 
| orchestrator | Saturday 12 July 2025 13:25:33 +0000 (0:00:01.683) 0:03:03.193 ********* 2025-07-12 13:25:52.522944 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:25:52.522954 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:25:52.522965 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:25:52.522975 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:25:52.522986 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:25:52.522996 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:25:52.523006 | orchestrator | changed: [testbed-manager] 2025-07-12 13:25:52.523017 | orchestrator | 2025-07-12 13:25:52.523050 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-07-12 13:25:52.523061 | orchestrator | Saturday 12 July 2025 13:25:45 +0000 (0:00:12.238) 0:03:15.432 ********* 2025-07-12 13:25:52.523097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-07-12 13:25:52.523133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-07-12 13:25:52.523171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-07-12 13:25:52.523191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-07-12 13:25:52.523203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-07-12 13:25:52.523214 | orchestrator | 2025-07-12 13:25:52.523225 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-07-12 13:25:52.523236 | orchestrator | Saturday 12 July 2025 13:25:46 +0000 (0:00:00.438) 0:03:15.871 ********* 2025-07-12 13:25:52.523247 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 13:25:52.523258 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:25:52.523279 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 13:25:52.523289 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 13:25:52.523321 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:25:52.523333 | orchestrator | skipping: [testbed-node-4] 2025-07-12 
13:25:52.523343 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 13:25:52.523354 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:25:52.523365 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 13:25:52.523375 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 13:25:52.523386 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 13:25:52.523396 | orchestrator | 2025-07-12 13:25:52.523407 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-07-12 13:25:52.523418 | orchestrator | Saturday 12 July 2025 13:25:46 +0000 (0:00:00.627) 0:03:16.498 ********* 2025-07-12 13:25:52.523428 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 13:25:52.523441 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 13:25:52.523451 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 13:25:52.523462 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 13:25:52.523473 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 13:25:52.523483 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 13:25:52.523494 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 13:25:52.523504 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 13:25:52.523515 | orchestrator | skipping: [testbed-manager] => 
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 13:25:52.523526 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 13:25:52.523536 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:25:52.523547 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 13:25:52.523558 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 13:25:52.523568 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 13:25:52.523579 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 13:25:52.523590 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 13:25:52.523600 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 13:25:52.523611 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 13:25:52.523622 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 13:25:52.523633 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 13:25:52.523644 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 13:25:52.523661 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 13:25:55.673243 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 13:25:55.673418 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:25:55.673436 | orchestrator | skipping: 
[testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 13:25:55.673449 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 13:25:55.673460 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 13:25:55.673471 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 13:25:55.673482 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 13:25:55.673493 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 13:25:55.673504 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 13:25:55.673514 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 13:25:55.673525 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:25:55.673536 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 13:25:55.673546 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 13:25:55.673557 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 13:25:55.673567 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 13:25:55.673578 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 13:25:55.673589 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 13:25:55.673599 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  
2025-07-12 13:25:55.673610 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 13:25:55.673620 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 13:25:55.673631 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 13:25:55.673642 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:25:55.673652 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-12 13:25:55.673663 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-12 13:25:55.673673 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-12 13:25:55.673701 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-12 13:25:55.673713 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-12 13:25:55.673724 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-12 13:25:55.673735 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-12 13:25:55.673746 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-12 13:25:55.673757 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-12 13:25:55.673767 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-12 13:25:55.673779 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-12 13:25:55.673790 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-12 13:25:55.673802 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-12 13:25:55.673823 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-12 13:25:55.673835 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-12 13:25:55.673847 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-12 13:25:55.673859 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-12 13:25:55.673876 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-12 13:25:55.673888 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-12 13:25:55.673901 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-12 13:25:55.673913 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-12 13:25:55.673942 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-12 13:25:55.673955 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-12 13:25:55.673966 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-12 13:25:55.673978 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-12 13:25:55.673990 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-12 13:25:55.674002 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2025-07-12 13:25:55.674014 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-12 13:25:55.674083 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-12 13:25:55.674096 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-12 13:25:55.674108 | orchestrator | 2025-07-12 13:25:55.674121 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-07-12 13:25:55.674134 | orchestrator | Saturday 12 July 2025 13:25:52 +0000 (0:00:05.607) 0:03:22.105 ********* 2025-07-12 13:25:55.674146 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 13:25:55.674156 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 13:25:55.674167 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 13:25:55.674178 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 13:25:55.674188 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 13:25:55.674199 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 13:25:55.674209 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 13:25:55.674220 | orchestrator | 2025-07-12 13:25:55.674230 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-07-12 13:25:55.674241 | orchestrator | Saturday 12 July 2025 13:25:54 +0000 (0:00:01.569) 0:03:23.675 ********* 2025-07-12 13:25:55.674251 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 13:25:55.674262 | orchestrator | skipping: 
[testbed-manager] 2025-07-12 13:25:55.674273 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 13:25:55.674284 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 13:25:55.674295 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:25:55.674352 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:25:55.674369 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 13:25:55.674380 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:25:55.674391 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-12 13:25:55.674402 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-12 13:25:55.674412 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-12 13:25:55.674423 | orchestrator | 2025-07-12 13:25:55.674434 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-07-12 13:25:55.674444 | orchestrator | Saturday 12 July 2025 13:25:54 +0000 (0:00:00.579) 0:03:24.255 ********* 2025-07-12 13:25:55.674455 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 13:25:55.674466 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:25:55.674477 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 13:25:55.674488 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:25:55.674498 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 13:25:55.674509 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 13:25:55.674519 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:25:55.674530 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:25:55.674541 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-12 13:25:55.674551 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-12 13:25:55.674562 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-12 13:25:55.674573 | orchestrator | 2025-07-12 13:25:55.674590 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-07-12 13:25:55.674601 | orchestrator | Saturday 12 July 2025 13:25:55 +0000 (0:00:00.707) 0:03:24.963 ********* 2025-07-12 13:25:55.674612 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:25:55.674622 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:25:55.674633 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:25:55.674644 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:25:55.674655 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:25:55.674674 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:26:07.295133 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:26:07.295250 | orchestrator | 2025-07-12 13:26:07.295274 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-07-12 13:26:07.295295 | orchestrator | Saturday 12 July 2025 13:25:55 +0000 (0:00:00.302) 0:03:25.265 ********* 2025-07-12 13:26:07.295361 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:26:07.295374 | orchestrator | ok: [testbed-manager] 2025-07-12 13:26:07.295386 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:26:07.295397 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:26:07.295408 | 
orchestrator | ok: [testbed-node-1] 2025-07-12 13:26:07.295419 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:26:07.295430 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:26:07.295441 | orchestrator | 2025-07-12 13:26:07.295453 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-07-12 13:26:07.295465 | orchestrator | Saturday 12 July 2025 13:26:01 +0000 (0:00:05.801) 0:03:31.067 ********* 2025-07-12 13:26:07.295476 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-07-12 13:26:07.295487 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:26:07.295498 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-07-12 13:26:07.295509 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:26:07.295545 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-07-12 13:26:07.295557 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-07-12 13:26:07.295568 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:26:07.295579 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-07-12 13:26:07.295589 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:26:07.295600 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-07-12 13:26:07.295611 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:26:07.295622 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:26:07.295633 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-07-12 13:26:07.295643 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:26:07.295654 | orchestrator | 2025-07-12 13:26:07.295666 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-07-12 13:26:07.295680 | orchestrator | Saturday 12 July 2025 13:26:01 +0000 (0:00:00.303) 0:03:31.370 ********* 2025-07-12 13:26:07.295693 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-07-12 13:26:07.295709 | orchestrator | ok: [testbed-node-1] => 
(item=cron) 2025-07-12 13:26:07.295722 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-07-12 13:26:07.295734 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-07-12 13:26:07.295746 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-07-12 13:26:07.295758 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-07-12 13:26:07.295770 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-07-12 13:26:07.295781 | orchestrator | 2025-07-12 13:26:07.295792 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-07-12 13:26:07.295803 | orchestrator | Saturday 12 July 2025 13:26:02 +0000 (0:00:01.015) 0:03:32.386 ********* 2025-07-12 13:26:07.295816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:26:07.295831 | orchestrator | 2025-07-12 13:26:07.295842 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-07-12 13:26:07.295853 | orchestrator | Saturday 12 July 2025 13:26:03 +0000 (0:00:00.410) 0:03:32.797 ********* 2025-07-12 13:26:07.295864 | orchestrator | ok: [testbed-manager] 2025-07-12 13:26:07.295875 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:26:07.295885 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:26:07.295896 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:26:07.295907 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:26:07.295918 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:26:07.295928 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:26:07.295939 | orchestrator | 2025-07-12 13:26:07.295950 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-07-12 13:26:07.295961 | orchestrator | Saturday 12 July 2025 13:26:04 +0000 (0:00:01.300) 0:03:34.097 
*********
2025-07-12 13:26:07.295971 | orchestrator | ok: [testbed-manager]
2025-07-12 13:26:07.295982 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:26:07.295993 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:26:07.296004 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:26:07.296014 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:26:07.296025 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:26:07.296036 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:26:07.296046 | orchestrator |
2025-07-12 13:26:07.296057 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-07-12 13:26:07.296069 | orchestrator | Saturday 12 July 2025 13:26:05 +0000 (0:00:00.640) 0:03:34.738 *********
2025-07-12 13:26:07.296079 | orchestrator | changed: [testbed-manager]
2025-07-12 13:26:07.296090 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:26:07.296101 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:26:07.296112 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:26:07.296123 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:26:07.296133 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:26:07.296144 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:26:07.296163 | orchestrator |
2025-07-12 13:26:07.296174 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-07-12 13:26:07.296185 | orchestrator | Saturday 12 July 2025 13:26:05 +0000 (0:00:00.588) 0:03:35.326 *********
2025-07-12 13:26:07.296195 | orchestrator | ok: [testbed-manager]
2025-07-12 13:26:07.296206 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:26:07.296217 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:26:07.296228 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:26:07.296238 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:26:07.296249 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:26:07.296275 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:26:07.296286 | orchestrator |
2025-07-12 13:26:07.296298 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-07-12 13:26:07.296337 | orchestrator | Saturday 12 July 2025 13:26:06 +0000 (0:00:00.574) 0:03:35.900 *********
2025-07-12 13:26:07.296386 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325334.0234683, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:07.296407 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325399.321523, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:07.296419 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325394.4206254, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:07.296431 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325411.3809364, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:07.296442 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325400.977633, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:07.296454 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325396.052089, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:07.296473 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1752325394.2064233, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:07.296504 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325369.1281412, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:31.846519 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325299.0541713, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:31.846645 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325302.5194852, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:31.846662 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325296.747038, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:31.846674 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325285.0231004, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:31.846686 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325289.0501099, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:31.846720 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1752325291.3944175, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 13:26:31.846733 | orchestrator |
2025-07-12 13:26:31.846746 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-07-12 13:26:31.846759 | orchestrator | Saturday 12 July 2025 13:26:07 +0000 (0:00:00.977) 0:03:36.878 *********
2025-07-12 13:26:31.846776 | orchestrator | changed: [testbed-manager]
2025-07-12 13:26:31.846788 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:26:31.846799 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:26:31.846810 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:26:31.846821 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:26:31.846831 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:26:31.846842 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:26:31.846852 | orchestrator |
2025-07-12 13:26:31.846863 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-07-12 13:26:31.846874 | orchestrator | Saturday 12 July 2025 13:26:08 +0000 (0:00:01.127) 0:03:38.006 *********
2025-07-12 13:26:31.846885 | orchestrator | changed: [testbed-manager]
2025-07-12 13:26:31.846896 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:26:31.846906 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:26:31.846917 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:26:31.846945 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:26:31.846956 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:26:31.846967 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:26:31.846978 | orchestrator |
2025-07-12 13:26:31.846989 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-07-12 13:26:31.847002 | orchestrator | Saturday 12 July 2025 13:26:09 +0000 (0:00:01.210) 0:03:39.216 *********
2025-07-12 13:26:31.847014 | orchestrator | changed: [testbed-manager]
2025-07-12 13:26:31.847026 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:26:31.847038 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:26:31.847050 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:26:31.847062 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:26:31.847074 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:26:31.847085 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:26:31.847097 | orchestrator |
2025-07-12 13:26:31.847109 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-07-12 13:26:31.847122 | orchestrator | Saturday 12 July 2025 13:26:10 +0000 (0:00:01.161) 0:03:40.377 *********
2025-07-12 13:26:31.847134 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:26:31.847146 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:26:31.847157 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:26:31.847173 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:26:31.847192 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:26:31.847205 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:26:31.847217 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:26:31.847229 | orchestrator |
2025-07-12 13:26:31.847241 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-07-12 13:26:31.847253 | orchestrator | Saturday 12 July 2025 13:26:11 +0000 (0:00:00.296) 0:03:40.674 *********
2025-07-12 13:26:31.847275 | orchestrator | ok: [testbed-manager]
2025-07-12 13:26:31.847288 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:26:31.847300 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:26:31.847312 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:26:31.847324 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:26:31.847336 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:26:31.847374 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:26:31.847386 | orchestrator |
2025-07-12 13:26:31.847397 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-07-12 13:26:31.847408 | orchestrator | Saturday 12 July 2025 13:26:11 +0000 (0:00:00.714) 0:03:41.389 *********
2025-07-12 13:26:31.847420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:26:31.847434 | orchestrator |
2025-07-12 13:26:31.847444 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-07-12 13:26:31.847455 | orchestrator | Saturday 12 July 2025 13:26:12 +0000 (0:00:00.396) 0:03:41.785 *********
2025-07-12 13:26:31.847466 | orchestrator | ok: [testbed-manager]
2025-07-12 13:26:31.847476 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:26:31.847487 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:26:31.847497 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:26:31.847508 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:26:31.847518 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:26:31.847528 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:26:31.847539 | orchestrator |
2025-07-12 13:26:31.847549 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-07-12 13:26:31.847560 | orchestrator | Saturday 12 July 2025 13:26:20 +0000 (0:00:07.824) 0:03:49.610 *********
2025-07-12 13:26:31.847571 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:26:31.847581 | orchestrator | ok: [testbed-manager]
2025-07-12 13:26:31.847592 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:26:31.847602 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:26:31.847613 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:26:31.847623 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:26:31.847634 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:26:31.847644 | orchestrator |
2025-07-12 13:26:31.847655 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-07-12 13:26:31.847666 | orchestrator | Saturday 12 July 2025 13:26:21 +0000 (0:00:01.174) 0:03:50.785 *********
2025-07-12 13:26:31.847676 | orchestrator | ok: [testbed-manager]
2025-07-12 13:26:31.847687 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:26:31.847697 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:26:31.847707 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:26:31.847718 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:26:31.847728 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:26:31.847738 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:26:31.847749 | orchestrator |
2025-07-12 13:26:31.847759 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-07-12 13:26:31.847770 | orchestrator | Saturday 12 July 2025 13:26:22 +0000 (0:00:01.025) 0:03:51.810 *********
2025-07-12 13:26:31.847781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:26:31.847792 | orchestrator |
2025-07-12 13:26:31.847803 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-07-12 13:26:31.847813 | orchestrator | Saturday 12 July 2025 13:26:22 +0000 (0:00:00.500) 0:03:52.310 *********
2025-07-12 13:26:31.847829 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:26:31.847840 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:26:31.847850 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:26:31.847861 | orchestrator | changed: [testbed-manager]
2025-07-12 13:26:31.847871 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:26:31.847889 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:26:31.847900 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:26:31.847910 | orchestrator |
2025-07-12 13:26:31.847921 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-07-12 13:26:31.847931 | orchestrator | Saturday 12 July 2025 13:26:31 +0000 (0:00:08.501) 0:04:00.812 *********
2025-07-12 13:26:31.847942 | orchestrator | changed: [testbed-manager]
2025-07-12 13:26:31.847952 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:26:31.847962 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:26:31.847980 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:40.460405 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:40.460581 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:40.460597 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:40.460610 | orchestrator |
2025-07-12 13:27:40.460622 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-07-12 13:27:40.460634 | orchestrator | Saturday 12 July 2025 13:26:31 +0000 (0:00:00.618) 0:04:01.430 *********
2025-07-12 13:27:40.460646 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:40.460657 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:40.460667 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:40.460678 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:40.460689 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:40.460700 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:40.460711 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:40.460721 | orchestrator |
2025-07-12 13:27:40.460732 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-07-12 13:27:40.460743 | orchestrator | Saturday 12 July 2025 13:26:32 +0000 (0:00:01.079) 0:04:02.510 *********
2025-07-12 13:27:40.460754 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:40.460765 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:40.460775 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:40.460786 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:40.460797 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:40.460807 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:40.460818 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:40.460829 | orchestrator |
2025-07-12 13:27:40.460839 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-07-12 13:27:40.460850 | orchestrator | Saturday 12 July 2025 13:26:34 +0000 (0:00:01.285) 0:04:03.795 *********
2025-07-12 13:27:40.460861 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:40.460873 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:40.460884 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:40.460895 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:40.460905 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:40.460916 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:40.460926 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:40.460939 | orchestrator |
2025-07-12 13:27:40.460951 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-07-12 13:27:40.460964 | orchestrator | Saturday 12 July 2025 13:26:34 +0000 (0:00:00.285) 0:04:04.081 *********
2025-07-12 13:27:40.460976 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:40.460988 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:40.461000 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:40.461012 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:40.461024 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:40.461036 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:40.461048 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:40.461059 | orchestrator |
2025-07-12 13:27:40.461071 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-07-12 13:27:40.461083 | orchestrator | Saturday 12 July 2025 13:26:34 +0000 (0:00:00.339) 0:04:04.421 *********
2025-07-12 13:27:40.461096 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:40.461108 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:40.461120 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:40.461259 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:40.461284 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:40.461302 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:40.461317 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:40.461328 | orchestrator |
2025-07-12 13:27:40.461339 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-07-12 13:27:40.461350 | orchestrator | Saturday 12 July 2025 13:26:35 +0000 (0:00:00.316) 0:04:04.738 *********
2025-07-12 13:27:40.461361 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:40.461371 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:40.461382 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:40.461393 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:40.461404 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:40.461415 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:40.461425 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:40.461436 | orchestrator |
2025-07-12 13:27:40.461470 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-07-12 13:27:40.461482 | orchestrator | Saturday 12 July 2025 13:26:41 +0000 (0:00:05.928) 0:04:10.666 *********
2025-07-12 13:27:40.461494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:27:40.461508 | orchestrator |
2025-07-12 13:27:40.461519 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-07-12 13:27:40.461530 | orchestrator | Saturday 12 July 2025 13:26:41 +0000 (0:00:00.377) 0:04:11.044 *********
2025-07-12 13:27:40.461540 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade) 
2025-07-12 13:27:40.461551 | orchestrator | skipping: [testbed-manager] => (item=apt-daily) 
2025-07-12 13:27:40.461563 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade) 
2025-07-12 13:27:40.461573 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily) 
2025-07-12 13:27:40.461584 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:40.461595 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade) 
2025-07-12 13:27:40.461606 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:40.461632 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily) 
2025-07-12 13:27:40.461643 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade) 
2025-07-12 13:27:40.461654 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily) 
2025-07-12 13:27:40.461665 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:40.461675 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:40.461686 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade) 
2025-07-12 13:27:40.461697 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily) 
2025-07-12 13:27:40.461707 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:40.461718 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade) 
2025-07-12 13:27:40.461729 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily) 
2025-07-12 13:27:40.461759 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:40.461771 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade) 
2025-07-12 13:27:40.461781 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily) 
2025-07-12 13:27:40.461792 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:40.461803 | orchestrator |
2025-07-12 13:27:40.461813 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-07-12 13:27:40.461824 | orchestrator | Saturday 12 July 2025 13:26:41 +0000 (0:00:00.351) 0:04:11.395 *********
2025-07-12 13:27:40.461836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:27:40.461847 | orchestrator |
2025-07-12 13:27:40.461857 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-07-12 13:27:40.461877 | orchestrator | Saturday 12 July 2025 13:26:42 +0000 (0:00:00.436) 0:04:11.831 *********
2025-07-12 13:27:40.461888 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service) 
2025-07-12 13:27:40.461899 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:40.461909 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service) 
2025-07-12 13:27:40.461920 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:40.461930 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service) 
2025-07-12 13:27:40.461941 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service) 
2025-07-12 13:27:40.461952 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:40.461962 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:40.461973 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service) 
2025-07-12 13:27:40.461983 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:40.461994 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service) 
2025-07-12 13:27:40.462004 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:40.462066 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service) 
2025-07-12 13:27:40.462079 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:40.462090 | orchestrator |
2025-07-12 13:27:40.462101 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-07-12 13:27:40.462111 | orchestrator | Saturday 12 July 2025 13:26:42 +0000 (0:00:00.303) 0:04:12.135 *********
2025-07-12 13:27:40.462123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:27:40.462134 | orchestrator |
2025-07-12 13:27:40.462144 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-07-12 13:27:40.462155 | orchestrator | Saturday 12 July 2025 13:26:43 +0000 (0:00:00.514) 0:04:12.649 *********
2025-07-12 13:27:40.462166 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:40.462177 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:40.462187 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:40.462198 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:40.462209 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:40.462219 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:40.462230 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:40.462241 | orchestrator |
2025-07-12 13:27:40.462252 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-07-12 13:27:40.462262 | orchestrator | Saturday 12 July 2025 13:27:17 +0000 (0:00:34.566) 0:04:47.216 *********
2025-07-12 13:27:40.462273 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:40.462283 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:40.462294 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:40.462305 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:40.462315 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:40.462326 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:40.462336 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:40.462347 | orchestrator |
2025-07-12 13:27:40.462358 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-07-12 13:27:40.462368 | orchestrator | Saturday 12 July 2025 13:27:25 +0000 (0:00:07.806) 0:04:55.022 *********
2025-07-12 13:27:40.462379 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:40.462390 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:40.462400 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:40.462411 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:40.462422 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:40.462432 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:40.462461 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:40.462472 | orchestrator |
2025-07-12 13:27:40.462483 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-07-12 13:27:40.462501 | orchestrator | Saturday 12 July 2025 13:27:32 +0000 (0:00:07.406) 0:05:02.428 *********
2025-07-12 13:27:40.462512 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:40.462523 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:40.462533 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:40.462544 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:40.462555 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:40.462565 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:40.462576 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:40.462586 | orchestrator |
2025-07-12 13:27:40.462597 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-07-12 13:27:40.462608 | orchestrator | Saturday 12 July 2025 13:27:34 +0000 (0:00:01.755) 0:05:04.184 *********
2025-07-12 13:27:40.462619 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:40.462629 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:40.462640 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:40.462651 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:40.462661 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:40.462672 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:40.462682 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:40.462693 | orchestrator |
2025-07-12 13:27:40.462704 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-07-12 13:27:40.462723 | orchestrator | Saturday 12 July 2025 13:27:40 +0000 (0:00:05.855) 0:05:10.040 *********
2025-07-12 13:27:51.647214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:27:51.647328 | orchestrator |
2025-07-12 13:27:51.647364 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-07-12 13:27:51.647378 | orchestrator | Saturday 12 July 2025 13:27:40 +0000 (0:00:00.422) 0:05:10.463 *********
2025-07-12 13:27:51.647390 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:51.647402 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:51.647413 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:51.647424 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:51.647434 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:51.647445 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:51.647507 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:51.647518 | orchestrator |
2025-07-12 13:27:51.647529 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-07-12 13:27:51.647540 | orchestrator | Saturday 12 July 2025 13:27:41 +0000 (0:00:00.705) 0:05:11.168 *********
2025-07-12 13:27:51.647551 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:51.647563 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:51.647574 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:51.647584 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:51.647595 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:51.647606 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:51.647616 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:51.647627 | orchestrator |
2025-07-12 13:27:51.647637 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-07-12 13:27:51.647648 | orchestrator | Saturday 12 July 2025 13:27:43 +0000 (0:00:01.701) 0:05:12.869 *********
2025-07-12 13:27:51.647659 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:27:51.647670 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:27:51.647681 | orchestrator | changed: [testbed-manager]
2025-07-12 13:27:51.647692 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:27:51.647703 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:27:51.647714 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:27:51.647724 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:27:51.647735 | orchestrator |
2025-07-12 13:27:51.647746 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-07-12 13:27:51.647757 | orchestrator | Saturday 12 July 2025 13:27:44 +0000 (0:00:00.792) 0:05:13.662 *********
2025-07-12 13:27:51.647793 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:51.647804 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:51.647815 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:51.647826 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:51.647836 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:51.647846 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:51.647857 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:51.647867 | orchestrator |
2025-07-12 13:27:51.647878 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-07-12 13:27:51.647889 | orchestrator | Saturday 12 July 2025 13:27:44 +0000 (0:00:00.295) 0:05:13.957 *********
2025-07-12 13:27:51.647900 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:51.647910 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:51.647921 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:51.647931 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:51.647941 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:51.647952 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:51.647962 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:51.647973 | orchestrator |
2025-07-12 13:27:51.647983 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-07-12 13:27:51.647994 | orchestrator | Saturday 12 July 2025 13:27:44 +0000 (0:00:00.380) 0:05:14.337 *********
2025-07-12 13:27:51.648004 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:51.648015 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:51.648025 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:51.648036 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:51.648046 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:51.648057 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:51.648068 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:51.648078 | orchestrator |
2025-07-12 13:27:51.648089 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-07-12 13:27:51.648099 | orchestrator | Saturday 12 July 2025 13:27:45 +0000 (0:00:00.278) 0:05:14.616 *********
2025-07-12 13:27:51.648110 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:27:51.648121 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:27:51.648131 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:27:51.648141 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:27:51.648152 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:27:51.648162 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:27:51.648172 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:27:51.648183 | orchestrator |
2025-07-12 13:27:51.648194 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-07-12 13:27:51.648205 | orchestrator | Saturday 12 July 2025 13:27:45 +0000 (0:00:00.288) 0:05:14.904 *********
2025-07-12 13:27:51.648215 | orchestrator | ok: [testbed-manager]
2025-07-12 13:27:51.648226 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:27:51.648236 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:27:51.648247 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:27:51.648257 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:27:51.648273 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:27:51.648284 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:27:51.648295 | orchestrator |
2025-07-12 13:27:51.648305 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-07-12 13:27:51.648316 | orchestrator | Saturday 12 July 2025 13:27:45 +0000 (0:00:00.295) 0:05:15.200 *********
2025-07-12 13:27:51.648327 | orchestrator | ok: [testbed-manager] => 
2025-07-12 13:27:51.648337 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:51.648348 | orchestrator | ok: [testbed-node-0] => 
2025-07-12 13:27:51.648358 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:51.648368 | orchestrator | ok: [testbed-node-1] => 
2025-07-12 13:27:51.648379 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:51.648389 | orchestrator | ok: [testbed-node-2] => 
2025-07-12 13:27:51.648400 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:51.648418 | orchestrator | ok: [testbed-node-3] => 
2025-07-12 13:27:51.648428 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:51.648494 | orchestrator | ok: [testbed-node-4] => 
2025-07-12 13:27:51.648507 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:51.648518 | orchestrator | ok: [testbed-node-5] => 
2025-07-12 13:27:51.648529 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 13:27:51.648540 | orchestrator |
2025-07-12 13:27:51.648550 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-07-12 13:27:51.648561 | orchestrator | Saturday 12 July 2025 13:27:45 +0000 (0:00:00.276) 0:05:15.477 *********
2025-07-12 13:27:51.648572 | orchestrator | ok: [testbed-manager] => 
2025-07-12 13:27:51.648583 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 13:27:51.648594 | orchestrator | ok: [testbed-node-0] => 
2025-07-12 13:27:51.648604 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 13:27:51.648615 | orchestrator | ok: [testbed-node-1] => 
2025-07-12 13:27:51.648626 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 13:27:51.648636 | orchestrator | ok: [testbed-node-2] => 
2025-07-12 13:27:51.648647 | orchestrator
|  docker_cli_version: 5:27.5.1 2025-07-12 13:27:51.648657 | orchestrator | ok: [testbed-node-3] =>  2025-07-12 13:27:51.648668 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-12 13:27:51.648679 | orchestrator | ok: [testbed-node-4] =>  2025-07-12 13:27:51.648690 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-12 13:27:51.648700 | orchestrator | ok: [testbed-node-5] =>  2025-07-12 13:27:51.648711 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-12 13:27:51.648722 | orchestrator | 2025-07-12 13:27:51.648733 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-07-12 13:27:51.648743 | orchestrator | Saturday 12 July 2025 13:27:46 +0000 (0:00:00.431) 0:05:15.908 ********* 2025-07-12 13:27:51.648754 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:27:51.648765 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:27:51.648776 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:27:51.648786 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:27:51.648797 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:27:51.648807 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:27:51.648818 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:27:51.648829 | orchestrator | 2025-07-12 13:27:51.648840 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-07-12 13:27:51.648850 | orchestrator | Saturday 12 July 2025 13:27:46 +0000 (0:00:00.314) 0:05:16.223 ********* 2025-07-12 13:27:51.648861 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:27:51.648872 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:27:51.648883 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:27:51.648893 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:27:51.648904 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:27:51.648915 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:27:51.648925 | orchestrator 
| skipping: [testbed-node-5] 2025-07-12 13:27:51.648936 | orchestrator | 2025-07-12 13:27:51.648947 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-07-12 13:27:51.648958 | orchestrator | Saturday 12 July 2025 13:27:46 +0000 (0:00:00.304) 0:05:16.528 ********* 2025-07-12 13:27:51.648971 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:27:51.648984 | orchestrator | 2025-07-12 13:27:51.648995 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-07-12 13:27:51.649005 | orchestrator | Saturday 12 July 2025 13:27:47 +0000 (0:00:00.392) 0:05:16.920 ********* 2025-07-12 13:27:51.649016 | orchestrator | ok: [testbed-manager] 2025-07-12 13:27:51.649027 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:27:51.649038 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:27:51.649049 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:27:51.649067 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:27:51.649078 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:27:51.649089 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:27:51.649099 | orchestrator | 2025-07-12 13:27:51.649110 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-07-12 13:27:51.649121 | orchestrator | Saturday 12 July 2025 13:27:48 +0000 (0:00:00.818) 0:05:17.739 ********* 2025-07-12 13:27:51.649132 | orchestrator | ok: [testbed-manager] 2025-07-12 13:27:51.649143 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:27:51.649153 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:27:51.649164 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:27:51.649174 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:27:51.649185 | orchestrator 
| ok: [testbed-node-0] 2025-07-12 13:27:51.649196 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:27:51.649206 | orchestrator | 2025-07-12 13:27:51.649217 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-07-12 13:27:51.649229 | orchestrator | Saturday 12 July 2025 13:27:51 +0000 (0:00:02.915) 0:05:20.655 ********* 2025-07-12 13:27:51.649240 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-07-12 13:27:51.649252 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-07-12 13:27:51.649263 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-07-12 13:27:51.649273 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-07-12 13:27:51.649284 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-07-12 13:27:51.649295 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-07-12 13:27:51.649306 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:27:51.649322 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-07-12 13:27:51.649333 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-07-12 13:27:51.649344 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-07-12 13:27:51.649354 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:27:51.649365 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-07-12 13:27:51.649375 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-07-12 13:27:51.649386 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-07-12 13:27:51.649397 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:27:51.649407 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-07-12 13:27:51.649418 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-07-12 13:27:51.649436 | orchestrator | skipping: [testbed-node-3] => 
(item=docker-engine)  2025-07-12 13:28:52.182418 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:28:52.182626 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-07-12 13:28:52.182644 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-07-12 13:28:52.182656 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-07-12 13:28:52.182668 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:28:52.182679 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:28:52.182689 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-07-12 13:28:52.182700 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-07-12 13:28:52.182711 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-07-12 13:28:52.182722 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:28:52.182733 | orchestrator | 2025-07-12 13:28:52.182746 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-07-12 13:28:52.182758 | orchestrator | Saturday 12 July 2025 13:27:51 +0000 (0:00:00.830) 0:05:21.485 ********* 2025-07-12 13:28:52.182769 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:52.182780 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.182791 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.182803 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.182813 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.182824 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.182859 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.182871 | orchestrator | 2025-07-12 13:28:52.182882 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-07-12 13:28:52.182892 | orchestrator | Saturday 12 July 2025 13:27:58 +0000 (0:00:06.557) 0:05:28.042 ********* 2025-07-12 13:28:52.182903 | orchestrator | ok: [testbed-manager] 
2025-07-12 13:28:52.182914 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.182925 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.182935 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.182946 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.182958 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.182969 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.182981 | orchestrator | 2025-07-12 13:28:52.182993 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-07-12 13:28:52.183004 | orchestrator | Saturday 12 July 2025 13:27:59 +0000 (0:00:01.102) 0:05:29.145 ********* 2025-07-12 13:28:52.183016 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:52.183028 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.183039 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.183051 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.183063 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.183074 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.183086 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.183098 | orchestrator | 2025-07-12 13:28:52.183110 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-07-12 13:28:52.183122 | orchestrator | Saturday 12 July 2025 13:28:07 +0000 (0:00:07.761) 0:05:36.906 ********* 2025-07-12 13:28:52.183134 | orchestrator | changed: [testbed-manager] 2025-07-12 13:28:52.183146 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.183158 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.183170 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.183182 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.183194 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.183206 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.183218 | 
orchestrator | 2025-07-12 13:28:52.183230 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-07-12 13:28:52.183242 | orchestrator | Saturday 12 July 2025 13:28:10 +0000 (0:00:03.427) 0:05:40.334 ********* 2025-07-12 13:28:52.183254 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:52.183266 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.183278 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.183290 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.183302 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.183312 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.183323 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.183333 | orchestrator | 2025-07-12 13:28:52.183344 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-07-12 13:28:52.183355 | orchestrator | Saturday 12 July 2025 13:28:12 +0000 (0:00:01.587) 0:05:41.921 ********* 2025-07-12 13:28:52.183365 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:52.183376 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.183386 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.183397 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.183408 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.183418 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.183428 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.183439 | orchestrator | 2025-07-12 13:28:52.183450 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-07-12 13:28:52.183460 | orchestrator | Saturday 12 July 2025 13:28:13 +0000 (0:00:01.357) 0:05:43.278 ********* 2025-07-12 13:28:52.183471 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:28:52.183500 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:28:52.183511 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 13:28:52.183533 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:28:52.183544 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:28:52.183554 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:28:52.183565 | orchestrator | changed: [testbed-manager] 2025-07-12 13:28:52.183575 | orchestrator | 2025-07-12 13:28:52.183602 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-07-12 13:28:52.183614 | orchestrator | Saturday 12 July 2025 13:28:14 +0000 (0:00:00.608) 0:05:43.887 ********* 2025-07-12 13:28:52.183624 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:52.183635 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.183645 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.183656 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.183666 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.183677 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.183687 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.183698 | orchestrator | 2025-07-12 13:28:52.183709 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-07-12 13:28:52.183720 | orchestrator | Saturday 12 July 2025 13:28:24 +0000 (0:00:10.175) 0:05:54.062 ********* 2025-07-12 13:28:52.183730 | orchestrator | changed: [testbed-manager] 2025-07-12 13:28:52.183758 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.183770 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.183781 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.183791 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.183802 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.183812 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.183823 | orchestrator | 2025-07-12 13:28:52.183834 | orchestrator | TASK [osism.services.docker : Install docker-cli package] 
********************** 2025-07-12 13:28:52.183845 | orchestrator | Saturday 12 July 2025 13:28:25 +0000 (0:00:00.916) 0:05:54.978 ********* 2025-07-12 13:28:52.183856 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:52.183866 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.183877 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.183887 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.183898 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.183908 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.183918 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.183929 | orchestrator | 2025-07-12 13:28:52.183939 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-07-12 13:28:52.183950 | orchestrator | Saturday 12 July 2025 13:28:34 +0000 (0:00:09.493) 0:06:04.472 ********* 2025-07-12 13:28:52.183961 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:52.183971 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.183982 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.183992 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.184003 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.184013 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.184024 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.184034 | orchestrator | 2025-07-12 13:28:52.184045 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-07-12 13:28:52.184055 | orchestrator | Saturday 12 July 2025 13:28:45 +0000 (0:00:10.862) 0:06:15.335 ********* 2025-07-12 13:28:52.184066 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-07-12 13:28:52.184077 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-07-12 13:28:52.184087 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-07-12 13:28:52.184098 | orchestrator | 
ok: [testbed-node-2] => (item=python3-docker) 2025-07-12 13:28:52.184108 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-07-12 13:28:52.184119 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-07-12 13:28:52.184129 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-07-12 13:28:52.184140 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-07-12 13:28:52.184151 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-07-12 13:28:52.184169 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-07-12 13:28:52.184180 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-07-12 13:28:52.184191 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-07-12 13:28:52.184201 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-07-12 13:28:52.184212 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-07-12 13:28:52.184222 | orchestrator | 2025-07-12 13:28:52.184233 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-07-12 13:28:52.184244 | orchestrator | Saturday 12 July 2025 13:28:46 +0000 (0:00:01.203) 0:06:16.538 ********* 2025-07-12 13:28:52.184254 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:28:52.184265 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:28:52.184276 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:28:52.184287 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:28:52.184297 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:28:52.184308 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:28:52.184318 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:28:52.184329 | orchestrator | 2025-07-12 13:28:52.184340 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-07-12 13:28:52.184350 | orchestrator | Saturday 12 July 2025 13:28:47 +0000 (0:00:00.499) 
0:06:17.038 ********* 2025-07-12 13:28:52.184361 | orchestrator | ok: [testbed-manager] 2025-07-12 13:28:52.184372 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:28:52.184383 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:28:52.184393 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:28:52.184404 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:28:52.184414 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:28:52.184425 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:28:52.184436 | orchestrator | 2025-07-12 13:28:52.184446 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-07-12 13:28:52.184459 | orchestrator | Saturday 12 July 2025 13:28:51 +0000 (0:00:03.895) 0:06:20.933 ********* 2025-07-12 13:28:52.184470 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:28:52.184511 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:28:52.184522 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:28:52.184533 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:28:52.184543 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:28:52.184554 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:28:52.184564 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:28:52.184575 | orchestrator | 2025-07-12 13:28:52.184586 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-07-12 13:28:52.184597 | orchestrator | Saturday 12 July 2025 13:28:51 +0000 (0:00:00.508) 0:06:21.442 ********* 2025-07-12 13:28:52.184608 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-07-12 13:28:52.184619 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-07-12 13:28:52.184630 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:28:52.184640 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  
2025-07-12 13:28:52.184651 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-07-12 13:28:52.184662 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:28:52.184672 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-07-12 13:28:52.184683 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-07-12 13:28:52.184693 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:28:52.184704 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-07-12 13:28:52.184721 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-07-12 13:29:11.425640 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:29:11.425756 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-07-12 13:29:11.425774 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-07-12 13:29:11.425814 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:29:11.425826 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-07-12 13:29:11.425837 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-07-12 13:29:11.425848 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:29:11.425859 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-07-12 13:29:11.425870 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-07-12 13:29:11.425880 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:29:11.425892 | orchestrator | 2025-07-12 13:29:11.425904 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-07-12 13:29:11.425917 | orchestrator | Saturday 12 July 2025 13:28:52 +0000 (0:00:00.573) 0:06:22.015 ********* 2025-07-12 13:29:11.425928 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:29:11.425939 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:29:11.425950 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 13:29:11.425960 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:29:11.425971 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:29:11.425982 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:29:11.425992 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:29:11.426003 | orchestrator | 2025-07-12 13:29:11.426066 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-07-12 13:29:11.426079 | orchestrator | Saturday 12 July 2025 13:28:52 +0000 (0:00:00.536) 0:06:22.552 ********* 2025-07-12 13:29:11.426090 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:29:11.426101 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:29:11.426112 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:29:11.426124 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:29:11.426136 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:29:11.426148 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:29:11.426160 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:29:11.426172 | orchestrator | 2025-07-12 13:29:11.426184 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-07-12 13:29:11.426197 | orchestrator | Saturday 12 July 2025 13:28:53 +0000 (0:00:00.489) 0:06:23.041 ********* 2025-07-12 13:29:11.426209 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:29:11.426222 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:29:11.426234 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:29:11.426247 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:29:11.426258 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:29:11.426270 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:29:11.426282 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:29:11.426295 | orchestrator | 2025-07-12 13:29:11.426307 | orchestrator | TASK [osism.services.docker : 
Ensure that some packages are not installed] ***** 2025-07-12 13:29:11.426319 | orchestrator | Saturday 12 July 2025 13:28:54 +0000 (0:00:00.667) 0:06:23.708 ********* 2025-07-12 13:29:11.426332 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:11.426344 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:29:11.426356 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:29:11.426387 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:29:11.426400 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:29:11.426412 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:29:11.426423 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:29:11.426436 | orchestrator | 2025-07-12 13:29:11.426448 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-07-12 13:29:11.426460 | orchestrator | Saturday 12 July 2025 13:28:55 +0000 (0:00:01.796) 0:06:25.505 ********* 2025-07-12 13:29:11.426473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:29:11.426486 | orchestrator | 2025-07-12 13:29:11.426553 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-07-12 13:29:11.426587 | orchestrator | Saturday 12 July 2025 13:28:56 +0000 (0:00:00.854) 0:06:26.359 ********* 2025-07-12 13:29:11.426598 | orchestrator | ok: [testbed-manager] 2025-07-12 13:29:11.426608 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:29:11.426619 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:29:11.426630 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:29:11.426641 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:29:11.426652 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:29:11.426662 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:29:11.426673 | orchestrator | 
2025-07-12 13:29:11.426684 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-07-12 13:29:11.426695 | orchestrator | Saturday 12 July 2025 13:28:57 +0000 (0:00:00.828) 0:06:27.187 *********
2025-07-12 13:29:11.426706 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:11.426716 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:11.426727 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:11.426738 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:11.426748 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:11.426759 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:11.426776 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:11.426787 | orchestrator |
2025-07-12 13:29:11.426798 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-07-12 13:29:11.426808 | orchestrator | Saturday 12 July 2025 13:28:58 +0000 (0:00:01.082) 0:06:28.269 *********
2025-07-12 13:29:11.426819 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:11.426830 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:11.426841 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:11.426852 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:11.426862 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:11.426873 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:11.426884 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:11.426894 | orchestrator |
2025-07-12 13:29:11.426906 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-07-12 13:29:11.426925 | orchestrator | Saturday 12 July 2025 13:29:00 +0000 (0:00:01.374) 0:06:29.644 *********
2025-07-12 13:29:11.426956 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:29:11.426968 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:11.426978 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:11.426989 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:11.427000 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:11.427011 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:11.427021 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:11.427032 | orchestrator |
2025-07-12 13:29:11.427043 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-07-12 13:29:11.427054 | orchestrator | Saturday 12 July 2025 13:29:01 +0000 (0:00:01.351) 0:06:30.996 *********
2025-07-12 13:29:11.427065 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:11.427075 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:11.427086 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:11.427097 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:11.427108 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:11.427119 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:11.427129 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:11.427140 | orchestrator |
2025-07-12 13:29:11.427151 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-07-12 13:29:11.427161 | orchestrator | Saturday 12 July 2025 13:29:02 +0000 (0:00:01.288) 0:06:32.284 *********
2025-07-12 13:29:11.427172 | orchestrator | changed: [testbed-manager]
2025-07-12 13:29:11.427183 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:11.427193 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:11.427204 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:11.427214 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:11.427225 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:11.427235 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:11.427255 | orchestrator |
2025-07-12 13:29:11.427266 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-07-12 13:29:11.427277 | orchestrator | Saturday 12 July 2025 13:29:04 +0000 (0:00:01.358) 0:06:33.643 *********
2025-07-12 13:29:11.427288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:29:11.427299 | orchestrator |
2025-07-12 13:29:11.427310 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-07-12 13:29:11.427321 | orchestrator | Saturday 12 July 2025 13:29:05 +0000 (0:00:01.064) 0:06:34.707 *********
2025-07-12 13:29:11.427332 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:11.427343 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:11.427354 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:11.427364 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:11.427375 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:11.427386 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:11.427397 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:11.427408 | orchestrator |
2025-07-12 13:29:11.427418 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-07-12 13:29:11.427429 | orchestrator | Saturday 12 July 2025 13:29:06 +0000 (0:00:01.503) 0:06:36.211 *********
2025-07-12 13:29:11.427440 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:11.427451 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:11.427461 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:11.427472 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:11.427483 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:11.427516 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:11.427527 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:11.427538 | orchestrator |
2025-07-12 13:29:11.427549 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-07-12 13:29:11.427559 | orchestrator | Saturday 12 July 2025 13:29:07 +0000 (0:00:01.152) 0:06:37.363 *********
2025-07-12 13:29:11.427570 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:11.427581 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:11.427591 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:11.427602 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:11.427612 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:11.427623 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:11.427633 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:11.427644 | orchestrator |
2025-07-12 13:29:11.427654 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-07-12 13:29:11.427665 | orchestrator | Saturday 12 July 2025 13:29:09 +0000 (0:00:01.318) 0:06:38.682 *********
2025-07-12 13:29:11.427676 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:11.427687 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:11.427697 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:11.427708 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:11.427718 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:11.427729 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:11.427739 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:11.427750 | orchestrator |
2025-07-12 13:29:11.427761 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-07-12 13:29:11.427772 | orchestrator | Saturday 12 July 2025 13:29:10 +0000 (0:00:01.141) 0:06:39.823 *********
2025-07-12 13:29:11.427783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:29:11.427793 | orchestrator |
2025-07-12 13:29:11.427810 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 13:29:11.427821 | orchestrator | Saturday 12 July 2025 13:29:11 +0000 (0:00:00.894) 0:06:40.718 *********
2025-07-12 13:29:11.427832 | orchestrator |
2025-07-12 13:29:11.427843 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 13:29:11.427863 | orchestrator | Saturday 12 July 2025 13:29:11 +0000 (0:00:00.038) 0:06:40.756 *********
2025-07-12 13:29:11.427874 | orchestrator |
2025-07-12 13:29:11.427884 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 13:29:11.427895 | orchestrator | Saturday 12 July 2025 13:29:11 +0000 (0:00:00.037) 0:06:40.793 *********
2025-07-12 13:29:11.427906 | orchestrator |
2025-07-12 13:29:11.427917 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 13:29:11.427928 | orchestrator | Saturday 12 July 2025 13:29:11 +0000 (0:00:00.044) 0:06:40.838 *********
2025-07-12 13:29:11.427938 | orchestrator |
2025-07-12 13:29:11.427957 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 13:29:37.186228 | orchestrator | Saturday 12 July 2025 13:29:11 +0000 (0:00:00.038) 0:06:40.877 *********
2025-07-12 13:29:37.186347 | orchestrator |
2025-07-12 13:29:37.186363 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 13:29:37.186375 | orchestrator | Saturday 12 July 2025 13:29:11 +0000 (0:00:00.038) 0:06:40.915 *********
2025-07-12 13:29:37.186387 | orchestrator |
2025-07-12 13:29:37.186397 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 13:29:37.186409 | orchestrator | Saturday 12 July 2025 13:29:11 +0000 (0:00:00.043) 0:06:40.959 *********
2025-07-12 13:29:37.186419 | orchestrator |
2025-07-12 13:29:37.186430 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-12 13:29:37.186441 | orchestrator | Saturday 12 July 2025 13:29:11 +0000 (0:00:00.039) 0:06:40.998 *********
2025-07-12 13:29:37.186452 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:37.186464 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:37.186475 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:37.186486 | orchestrator |
2025-07-12 13:29:37.186497 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-07-12 13:29:37.186549 | orchestrator | Saturday 12 July 2025 13:29:12 +0000 (0:00:01.355) 0:06:42.354 *********
2025-07-12 13:29:37.186561 | orchestrator | changed: [testbed-manager]
2025-07-12 13:29:37.186573 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:37.186584 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:37.186595 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:37.186606 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:37.186616 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:37.186627 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:37.186638 | orchestrator |
2025-07-12 13:29:37.186649 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-07-12 13:29:37.186660 | orchestrator | Saturday 12 July 2025 13:29:14 +0000 (0:00:01.302) 0:06:43.657 *********
2025-07-12 13:29:37.186685 | orchestrator | changed: [testbed-manager]
2025-07-12 13:29:37.186697 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:37.186708 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:37.186719 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:37.186729 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:37.186740 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:37.186751 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:37.186762 | orchestrator |
2025-07-12 13:29:37.186774 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-07-12 13:29:37.186786 | orchestrator | Saturday 12 July 2025 13:29:15 +0000 (0:00:01.150) 0:06:44.807 *********
2025-07-12 13:29:37.186798 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:29:37.186809 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:37.186821 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:37.186833 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:37.186844 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:37.186856 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:37.186867 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:37.186879 | orchestrator |
2025-07-12 13:29:37.186891 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-07-12 13:29:37.186927 | orchestrator | Saturday 12 July 2025 13:29:17 +0000 (0:00:02.211) 0:06:47.018 *********
2025-07-12 13:29:37.186940 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:29:37.186952 | orchestrator |
2025-07-12 13:29:37.186963 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-07-12 13:29:37.186975 | orchestrator | Saturday 12 July 2025 13:29:17 +0000 (0:00:00.123) 0:06:47.142 *********
2025-07-12 13:29:37.186987 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:37.186999 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:37.187011 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:37.187023 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:37.187034 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:37.187046 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:37.187057 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:37.187069 | orchestrator |
2025-07-12 13:29:37.187082 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-07-12 13:29:37.187095 | orchestrator | Saturday 12 July 2025 13:29:18 +0000 (0:00:00.966) 0:06:48.108 *********
2025-07-12 13:29:37.187107 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:29:37.187119 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:29:37.187129 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:29:37.187140 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:29:37.187150 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:29:37.187161 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:29:37.187171 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:29:37.187182 | orchestrator |
2025-07-12 13:29:37.187193 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-07-12 13:29:37.187204 | orchestrator | Saturday 12 July 2025 13:29:19 +0000 (0:00:00.724) 0:06:48.833 *********
2025-07-12 13:29:37.187215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:29:37.187229 | orchestrator |
2025-07-12 13:29:37.187251 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-07-12 13:29:37.187263 | orchestrator | Saturday 12 July 2025 13:29:20 +0000 (0:00:00.910) 0:06:49.744 *********
2025-07-12 13:29:37.187273 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:37.187284 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:37.187295 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:37.187305 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:37.187316 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:37.187327 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:37.187338 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:37.187348 | orchestrator |
2025-07-12 13:29:37.187359 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-07-12 13:29:37.187370 | orchestrator | Saturday 12 July 2025 13:29:21 +0000 (0:00:00.876) 0:06:50.620 *********
2025-07-12 13:29:37.187381 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-07-12 13:29:37.187392 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-07-12 13:29:37.187420 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-07-12 13:29:37.187433 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-07-12 13:29:37.187444 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-07-12 13:29:37.187455 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-07-12 13:29:37.187466 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-07-12 13:29:37.187477 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-07-12 13:29:37.187487 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-07-12 13:29:37.187498 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-07-12 13:29:37.187532 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-07-12 13:29:37.187542 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-07-12 13:29:37.187569 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-07-12 13:29:37.187580 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-07-12 13:29:37.187591 | orchestrator |
2025-07-12 13:29:37.187602 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-07-12 13:29:37.187613 | orchestrator | Saturday 12 July 2025 13:29:23 +0000 (0:00:02.641) 0:06:53.262 *********
2025-07-12 13:29:37.187623 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:29:37.187634 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:29:37.187645 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:29:37.187655 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:29:37.187666 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:29:37.187676 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:29:37.187751 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:29:37.187763 | orchestrator |
2025-07-12 13:29:37.187774 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-07-12 13:29:37.187785 | orchestrator | Saturday 12 July 2025 13:29:24 +0000 (0:00:00.516) 0:06:53.779 *********
2025-07-12 13:29:37.187798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:29:37.187811 | orchestrator |
2025-07-12 13:29:37.187822 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-07-12 13:29:37.187833 | orchestrator | Saturday 12 July 2025 13:29:25 +0000 (0:00:00.848) 0:06:54.627 *********
2025-07-12 13:29:37.187844 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:37.187855 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:37.187865 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:37.187876 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:37.187887 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:37.187898 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:37.187908 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:37.187919 | orchestrator |
2025-07-12 13:29:37.187930 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-07-12 13:29:37.187941 | orchestrator | Saturday 12 July 2025 13:29:26 +0000 (0:00:01.098) 0:06:55.726 *********
2025-07-12 13:29:37.187952 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:37.187962 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:37.187973 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:37.187984 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:37.187994 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:37.188005 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:37.188016 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:37.188026 | orchestrator |
2025-07-12 13:29:37.188037 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-07-12 13:29:37.188048 | orchestrator | Saturday 12 July 2025 13:29:26 +0000 (0:00:00.838) 0:06:56.565 *********
2025-07-12 13:29:37.188059 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:29:37.188070 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:29:37.188080 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:29:37.188091 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:29:37.188102 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:29:37.188112 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:29:37.188123 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:29:37.188134 | orchestrator |
2025-07-12 13:29:37.188145 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-07-12 13:29:37.188156 | orchestrator | Saturday 12 July 2025 13:29:27 +0000 (0:00:00.505) 0:06:57.070 *********
2025-07-12 13:29:37.188167 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:37.188178 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:29:37.188189 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:29:37.188200 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:29:37.188219 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:29:37.188230 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:29:37.188240 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:29:37.188251 | orchestrator |
2025-07-12 13:29:37.188262 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-07-12 13:29:37.188273 | orchestrator | Saturday 12 July 2025 13:29:28 +0000 (0:00:01.485) 0:06:58.556 *********
2025-07-12 13:29:37.188284 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:29:37.188294 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:29:37.188312 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:29:37.188323 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:29:37.188334 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:29:37.188344 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:29:37.188355 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:29:37.188366 | orchestrator |
2025-07-12 13:29:37.188377 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-07-12 13:29:37.188388 | orchestrator | Saturday 12 July 2025 13:29:29 +0000 (0:00:00.483) 0:06:59.040 *********
2025-07-12 13:29:37.188399 | orchestrator | ok: [testbed-manager]
2025-07-12 13:29:37.188409 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:29:37.188420 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:29:37.188431 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:29:37.188441 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:29:37.188452 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:29:37.188463 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:29:37.188474 | orchestrator |
2025-07-12 13:29:37.188493 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-07-12 13:30:08.838246 | orchestrator | Saturday 12 July 2025 13:29:37 +0000 (0:00:07.723) 0:07:06.763 *********
2025-07-12 13:30:08.838346 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.838363 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:30:08.838375 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:30:08.838387 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:30:08.838401 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:30:08.838421 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:30:08.838439 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:30:08.838458 | orchestrator |
2025-07-12 13:30:08.838477 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-07-12 13:30:08.838497 | orchestrator | Saturday 12 July 2025 13:29:38 +0000 (0:00:01.316) 0:07:08.079 *********
2025-07-12 13:30:08.838557 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.838570 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:30:08.838582 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:30:08.838592 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:30:08.838604 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:30:08.838614 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:30:08.838631 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:30:08.838644 | orchestrator |
2025-07-12 13:30:08.838655 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-07-12 13:30:08.838667 | orchestrator | Saturday 12 July 2025 13:29:40 +0000 (0:00:01.843) 0:07:09.923 *********
2025-07-12 13:30:08.838678 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.838692 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:30:08.838710 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:30:08.838729 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:30:08.838747 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:30:08.838767 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:30:08.838785 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:30:08.838797 | orchestrator |
2025-07-12 13:30:08.838811 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 13:30:08.838824 | orchestrator | Saturday 12 July 2025 13:29:41 +0000 (0:00:01.070) 0:07:11.591 *********
2025-07-12 13:30:08.838836 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.838849 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:08.838884 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:08.838897 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:08.838908 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:08.838920 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:08.838932 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:08.838944 | orchestrator |
2025-07-12 13:30:08.838957 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 13:30:08.838969 | orchestrator | Saturday 12 July 2025 13:29:43 +0000 (0:00:01.070) 0:07:12.662 *********
2025-07-12 13:30:08.838981 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:30:08.838994 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:30:08.839006 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:30:08.839019 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:30:08.839030 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:30:08.839043 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:30:08.839055 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:30:08.839068 | orchestrator |
2025-07-12 13:30:08.839080 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-07-12 13:30:08.839093 | orchestrator | Saturday 12 July 2025 13:29:43 +0000 (0:00:00.771) 0:07:13.433 *********
2025-07-12 13:30:08.839105 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:30:08.839118 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:30:08.839130 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:30:08.839142 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:30:08.839154 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:30:08.839165 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:30:08.839176 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:30:08.839186 | orchestrator |
2025-07-12 13:30:08.839197 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-07-12 13:30:08.839208 | orchestrator | Saturday 12 July 2025 13:29:44 +0000 (0:00:00.530) 0:07:13.963 *********
2025-07-12 13:30:08.839219 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.839229 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:08.839240 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:08.839251 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:08.839261 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:08.839272 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:08.839283 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:08.839293 | orchestrator |
2025-07-12 13:30:08.839304 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-07-12 13:30:08.839315 | orchestrator | Saturday 12 July 2025 13:29:45 +0000 (0:00:00.739) 0:07:14.703 *********
2025-07-12 13:30:08.839326 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.839337 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:08.839347 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:08.839358 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:08.839369 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:08.839379 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:08.839390 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:08.839401 | orchestrator |
2025-07-12 13:30:08.839411 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-07-12 13:30:08.839422 | orchestrator | Saturday 12 July 2025 13:29:45 +0000 (0:00:00.584) 0:07:15.288 *********
2025-07-12 13:30:08.839433 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.839444 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:08.839455 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:08.839466 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:08.839476 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:08.839487 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:08.839497 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:08.839508 | orchestrator |
2025-07-12 13:30:08.839566 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-07-12 13:30:08.839586 | orchestrator | Saturday 12 July 2025 13:29:46 +0000 (0:00:00.541) 0:07:15.829 *********
2025-07-12 13:30:08.839601 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.839621 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:08.839632 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:08.839643 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:08.839653 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:08.839664 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:08.839674 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:08.839685 | orchestrator |
2025-07-12 13:30:08.839696 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-07-12 13:30:08.839725 | orchestrator | Saturday 12 July 2025 13:29:51 +0000 (0:00:05.704) 0:07:21.533 *********
2025-07-12 13:30:08.839737 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:30:08.839748 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:30:08.839759 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:30:08.839770 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:30:08.839780 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:30:08.839791 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:30:08.839801 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:30:08.839812 | orchestrator |
2025-07-12 13:30:08.839823 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-07-12 13:30:08.839834 | orchestrator | Saturday 12 July 2025 13:29:52 +0000 (0:00:00.572) 0:07:22.106 *********
2025-07-12 13:30:08.839846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:30:08.839859 | orchestrator |
2025-07-12 13:30:08.839870 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-07-12 13:30:08.839880 | orchestrator | Saturday 12 July 2025 13:29:53 +0000 (0:00:01.006) 0:07:23.113 *********
2025-07-12 13:30:08.839891 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.839902 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:08.839913 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:08.839923 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:08.839934 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:08.839944 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:08.839955 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:08.839966 | orchestrator |
2025-07-12 13:30:08.839977 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-07-12 13:30:08.839988 | orchestrator | Saturday 12 July 2025 13:29:55 +0000 (0:00:01.911) 0:07:25.024 *********
2025-07-12 13:30:08.839998 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.840009 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:08.840020 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:08.840030 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:08.840041 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:08.840051 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:08.840062 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:08.840072 | orchestrator |
2025-07-12 13:30:08.840083 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-07-12 13:30:08.840094 | orchestrator | Saturday 12 July 2025 13:29:56 +0000 (0:00:01.123) 0:07:26.147 *********
2025-07-12 13:30:08.840105 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:08.840116 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:08.840126 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:08.840137 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:08.840147 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:08.840158 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:08.840168 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:08.840179 | orchestrator |
2025-07-12 13:30:08.840190 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-07-12 13:30:08.840201 | orchestrator | Saturday 12 July 2025 13:29:57 +0000 (0:00:01.058) 0:07:27.206 *********
2025-07-12 13:30:08.840252 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:30:08.840266 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:30:08.840285 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:30:08.840296 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:30:08.840307 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:30:08.840318 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:30:08.840329 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 13:30:08.840340 | orchestrator |
2025-07-12 13:30:08.840350 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-07-12 13:30:08.840361 | orchestrator | Saturday 12 July 2025 13:29:59 +0000 (0:00:01.695) 0:07:28.902 *********
2025-07-12 13:30:08.840377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:30:08.840388 | orchestrator |
2025-07-12 13:30:08.840399 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-07-12 13:30:08.840410 | orchestrator | Saturday 12 July 2025 13:30:00 +0000 (0:00:00.775) 0:07:29.677 *********
2025-07-12 13:30:08.840421 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:30:08.840431 | orchestrator | changed: [testbed-manager]
2025-07-12 13:30:08.840442 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:30:08.840453 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:30:08.840463 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:30:08.840474 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:30:08.840484 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:30:08.840495 | orchestrator |
2025-07-12 13:30:08.840506 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-07-12 13:30:08.840548 | orchestrator | Saturday 12 July 2025 13:30:08 +0000 (0:00:08.738) 0:07:38.416 *********
2025-07-12 13:30:24.823696 | orchestrator | ok: [testbed-manager]
2025-07-12 13:30:24.823811 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:24.823826 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:24.823838 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:24.823849 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:24.823860 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:24.823871 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:24.823883 | orchestrator |
2025-07-12 13:30:24.823895 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-07-12 13:30:24.823908 | orchestrator | Saturday 12 July 2025 13:30:10 +0000 (0:00:01.728) 0:07:40.145 *********
2025-07-12 13:30:24.823919 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:30:24.823931 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:30:24.823942 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:30:24.823953 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:30:24.823964 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:30:24.823975 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:30:24.823986 | orchestrator |
2025-07-12 13:30:24.823998 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-07-12 13:30:24.824009 | orchestrator | Saturday 12 July 2025 13:30:11 +0000 (0:00:01.315) 0:07:41.460 *********
2025-07-12 13:30:24.824021 | orchestrator | changed: [testbed-manager]
2025-07-12 13:30:24.824034 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:30:24.824045 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:30:24.824056 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:30:24.824087 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:30:24.824098 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:30:24.824108 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:30:24.824119 | orchestrator |
2025-07-12 13:30:24.824130 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-07-12 13:30:24.824141 | orchestrator |
2025-07-12 13:30:24.824151 | orchestrator | TASK [Include hardening role] **************************************************
2025-07-12 13:30:24.824162 | orchestrator | Saturday 12 July 2025 13:30:13 +0000 (0:00:01.457) 0:07:42.917 *********
2025-07-12 13:30:24.824173 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:30:24.824198 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:30:24.824209 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:30:24.824219 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:30:24.824230 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:30:24.824242 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:30:24.824254 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:30:24.824266 | orchestrator |
2025-07-12 13:30:24.824278 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-07-12 13:30:24.824291 | orchestrator |
2025-07-12 13:30:24.824303 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-07-12 13:30:24.824315 | orchestrator | Saturday 12 July 2025 13:30:13 +0000 (0:00:00.527) 0:07:43.445 *********
2025-07-12 13:30:24.824328 | orchestrator | changed: [testbed-manager]
2025-07-12 13:30:24.824339 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:30:24.824351 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:30:24.824362 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:30:24.824375 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:30:24.824387 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:30:24.824399 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:30:24.824411 | orchestrator |
2025-07-12 13:30:24.824423 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-07-12 13:30:24.824435 | orchestrator | Saturday
12 July 2025 13:30:15 +0000 (0:00:01.360) 0:07:44.805 ********* 2025-07-12 13:30:24.824448 | orchestrator | ok: [testbed-manager] 2025-07-12 13:30:24.824459 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:30:24.824472 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:30:24.824483 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:30:24.824496 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:30:24.824508 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:30:24.824568 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:30:24.824592 | orchestrator | 2025-07-12 13:30:24.824610 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-07-12 13:30:24.824627 | orchestrator | Saturday 12 July 2025 13:30:16 +0000 (0:00:01.403) 0:07:46.209 ********* 2025-07-12 13:30:24.824638 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:30:24.824649 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:30:24.824659 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:30:24.824670 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:30:24.824680 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:30:24.824690 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:30:24.824701 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:30:24.824711 | orchestrator | 2025-07-12 13:30:24.824722 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-07-12 13:30:24.824733 | orchestrator | Saturday 12 July 2025 13:30:17 +0000 (0:00:00.965) 0:07:47.175 ********* 2025-07-12 13:30:24.824744 | orchestrator | changed: [testbed-manager] 2025-07-12 13:30:24.824754 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:30:24.824765 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:30:24.824775 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:30:24.824785 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:30:24.824796 | orchestrator | changed: 
[testbed-node-4] 2025-07-12 13:30:24.824806 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:30:24.824816 | orchestrator | 2025-07-12 13:30:24.824827 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-07-12 13:30:24.824845 | orchestrator | 2025-07-12 13:30:24.824855 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-07-12 13:30:24.824867 | orchestrator | Saturday 12 July 2025 13:30:18 +0000 (0:00:01.218) 0:07:48.393 ********* 2025-07-12 13:30:24.824878 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:30:24.824890 | orchestrator | 2025-07-12 13:30:24.824900 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-07-12 13:30:24.824911 | orchestrator | Saturday 12 July 2025 13:30:19 +0000 (0:00:00.976) 0:07:49.369 ********* 2025-07-12 13:30:24.824921 | orchestrator | ok: [testbed-manager] 2025-07-12 13:30:24.824932 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:30:24.824942 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:30:24.824953 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:30:24.824963 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:30:24.824973 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:30:24.824984 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:30:24.824994 | orchestrator | 2025-07-12 13:30:24.825026 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-07-12 13:30:24.825045 | orchestrator | Saturday 12 July 2025 13:30:20 +0000 (0:00:00.828) 0:07:50.197 ********* 2025-07-12 13:30:24.825063 | orchestrator | changed: [testbed-manager] 2025-07-12 13:30:24.825080 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:30:24.825098 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:30:24.825112 | 
orchestrator | changed: [testbed-node-2] 2025-07-12 13:30:24.825122 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:30:24.825133 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:30:24.825143 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:30:24.825154 | orchestrator | 2025-07-12 13:30:24.825165 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-07-12 13:30:24.825175 | orchestrator | Saturday 12 July 2025 13:30:21 +0000 (0:00:01.167) 0:07:51.365 ********* 2025-07-12 13:30:24.825186 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:30:24.825197 | orchestrator | 2025-07-12 13:30:24.825207 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-07-12 13:30:24.825218 | orchestrator | Saturday 12 July 2025 13:30:22 +0000 (0:00:01.043) 0:07:52.408 ********* 2025-07-12 13:30:24.825228 | orchestrator | ok: [testbed-manager] 2025-07-12 13:30:24.825239 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:30:24.825249 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:30:24.825259 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:30:24.825270 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:30:24.825280 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:30:24.825291 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:30:24.825301 | orchestrator | 2025-07-12 13:30:24.825312 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-07-12 13:30:24.825322 | orchestrator | Saturday 12 July 2025 13:30:23 +0000 (0:00:00.828) 0:07:53.237 ********* 2025-07-12 13:30:24.825333 | orchestrator | changed: [testbed-manager] 2025-07-12 13:30:24.825343 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:30:24.825354 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:30:24.825365 | 
orchestrator | changed: [testbed-node-2] 2025-07-12 13:30:24.825375 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:30:24.825385 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:30:24.825396 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:30:24.825406 | orchestrator | 2025-07-12 13:30:24.825417 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:30:24.825428 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-07-12 13:30:24.825440 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-07-12 13:30:24.825462 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-12 13:30:24.825473 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-12 13:30:24.825484 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-12 13:30:24.825495 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-12 13:30:24.825505 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-12 13:30:24.825516 | orchestrator | 2025-07-12 13:30:24.825553 | orchestrator | 2025-07-12 13:30:24.825564 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:30:24.825575 | orchestrator | Saturday 12 July 2025 13:30:24 +0000 (0:00:01.156) 0:07:54.394 ********* 2025-07-12 13:30:24.825586 | orchestrator | =============================================================================== 2025-07-12 13:30:24.825597 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.78s 2025-07-12 13:30:24.825607 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 37.66s 2025-07-12 13:30:24.825618 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.57s 2025-07-12 13:30:24.825628 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.96s 2025-07-12 13:30:24.825639 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.74s 2025-07-12 13:30:24.825658 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.24s 2025-07-12 13:30:24.825671 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.86s 2025-07-12 13:30:24.825681 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.18s 2025-07-12 13:30:24.825692 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.49s 2025-07-12 13:30:24.825702 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.74s 2025-07-12 13:30:24.825713 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.50s 2025-07-12 13:30:24.825723 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.83s 2025-07-12 13:30:24.825734 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.81s 2025-07-12 13:30:24.825745 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.76s 2025-07-12 13:30:24.825763 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.72s 2025-07-12 13:30:25.289785 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.41s 2025-07-12 13:30:25.289887 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.56s 2025-07-12 13:30:25.289902 | orchestrator | 
osism.commons.cleanup : Populate service facts -------------------------- 5.93s 2025-07-12 13:30:25.289914 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.86s 2025-07-12 13:30:25.289925 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.80s 2025-07-12 13:30:25.571877 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-07-12 13:30:25.571972 | orchestrator | + osism apply network 2025-07-12 13:30:38.130815 | orchestrator | 2025-07-12 13:30:38 | INFO  | Task 1fc3d302-8b38-423f-9ab7-51394497f405 (network) was prepared for execution. 2025-07-12 13:30:38.130928 | orchestrator | 2025-07-12 13:30:38 | INFO  | It takes a moment until task 1fc3d302-8b38-423f-9ab7-51394497f405 (network) has been started and output is visible here. 2025-07-12 13:31:06.839939 | orchestrator | 2025-07-12 13:31:06.840053 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-07-12 13:31:06.840070 | orchestrator | 2025-07-12 13:31:06.840081 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-07-12 13:31:06.840093 | orchestrator | Saturday 12 July 2025 13:30:42 +0000 (0:00:00.315) 0:00:00.315 ********* 2025-07-12 13:31:06.840104 | orchestrator | ok: [testbed-manager] 2025-07-12 13:31:06.840116 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:31:06.840127 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:31:06.840138 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:31:06.840148 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:31:06.840159 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:31:06.840169 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:31:06.840180 | orchestrator | 2025-07-12 13:31:06.840191 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-07-12 13:31:06.840202 | orchestrator | Saturday 12 July 2025 13:30:43 +0000 (0:00:00.720) 
0:00:01.036 ********* 2025-07-12 13:31:06.840214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:31:06.840227 | orchestrator | 2025-07-12 13:31:06.840238 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-07-12 13:31:06.840249 | orchestrator | Saturday 12 July 2025 13:30:44 +0000 (0:00:01.183) 0:00:02.219 ********* 2025-07-12 13:31:06.840260 | orchestrator | ok: [testbed-manager] 2025-07-12 13:31:06.840271 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:31:06.840282 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:31:06.840293 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:31:06.840304 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:31:06.840314 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:31:06.840325 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:31:06.840335 | orchestrator | 2025-07-12 13:31:06.840346 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-07-12 13:31:06.840357 | orchestrator | Saturday 12 July 2025 13:30:46 +0000 (0:00:02.081) 0:00:04.300 ********* 2025-07-12 13:31:06.840368 | orchestrator | ok: [testbed-manager] 2025-07-12 13:31:06.840379 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:31:06.840389 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:31:06.840400 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:31:06.840411 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:31:06.840421 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:31:06.840432 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:31:06.840442 | orchestrator | 2025-07-12 13:31:06.840453 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-07-12 13:31:06.840466 | orchestrator | 
Saturday 12 July 2025 13:30:48 +0000 (0:00:01.749) 0:00:06.050 ********* 2025-07-12 13:31:06.840479 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-07-12 13:31:06.840492 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-07-12 13:31:06.840504 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-07-12 13:31:06.840516 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-07-12 13:31:06.840528 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-07-12 13:31:06.840569 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-07-12 13:31:06.840590 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-07-12 13:31:06.840610 | orchestrator | 2025-07-12 13:31:06.840688 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-07-12 13:31:06.840703 | orchestrator | Saturday 12 July 2025 13:30:49 +0000 (0:00:01.022) 0:00:07.073 ********* 2025-07-12 13:31:06.840717 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 13:31:06.840730 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 13:31:06.840742 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 13:31:06.840778 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 13:31:06.840805 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 13:31:06.840818 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 13:31:06.840830 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 13:31:06.840874 | orchestrator | 2025-07-12 13:31:06.840885 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-07-12 13:31:06.840896 | orchestrator | Saturday 12 July 2025 13:30:52 +0000 (0:00:03.278) 0:00:10.351 ********* 2025-07-12 13:31:06.840907 | orchestrator | changed: [testbed-manager] 2025-07-12 13:31:06.840917 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:31:06.840928 | orchestrator | 
changed: [testbed-node-1] 2025-07-12 13:31:06.840938 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:31:06.840949 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:31:06.840959 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:31:06.840970 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:31:06.840980 | orchestrator | 2025-07-12 13:31:06.840991 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-07-12 13:31:06.841002 | orchestrator | Saturday 12 July 2025 13:30:53 +0000 (0:00:01.441) 0:00:11.793 ********* 2025-07-12 13:31:06.841013 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 13:31:06.841023 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 13:31:06.841034 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 13:31:06.841045 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 13:31:06.841056 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 13:31:06.841066 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 13:31:06.841077 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 13:31:06.841087 | orchestrator | 2025-07-12 13:31:06.841098 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-07-12 13:31:06.841109 | orchestrator | Saturday 12 July 2025 13:30:55 +0000 (0:00:01.941) 0:00:13.735 ********* 2025-07-12 13:31:06.841119 | orchestrator | ok: [testbed-manager] 2025-07-12 13:31:06.841130 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:31:06.841141 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:31:06.841151 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:31:06.841162 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:31:06.841172 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:31:06.841182 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:31:06.841193 | orchestrator | 2025-07-12 13:31:06.841204 | orchestrator | TASK [osism.commons.network : 
Copy interfaces file] **************************** 2025-07-12 13:31:06.841233 | orchestrator | Saturday 12 July 2025 13:30:56 +0000 (0:00:01.081) 0:00:14.817 ********* 2025-07-12 13:31:06.841245 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:31:06.841256 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:31:06.841266 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:31:06.841277 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:31:06.841287 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:31:06.841298 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:31:06.841309 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:31:06.841319 | orchestrator | 2025-07-12 13:31:06.841330 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-07-12 13:31:06.841341 | orchestrator | Saturday 12 July 2025 13:30:57 +0000 (0:00:00.641) 0:00:15.459 ********* 2025-07-12 13:31:06.841351 | orchestrator | ok: [testbed-manager] 2025-07-12 13:31:06.841362 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:31:06.841373 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:31:06.841383 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:31:06.841394 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:31:06.841405 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:31:06.841415 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:31:06.841426 | orchestrator | 2025-07-12 13:31:06.841437 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-07-12 13:31:06.841447 | orchestrator | Saturday 12 July 2025 13:30:59 +0000 (0:00:02.282) 0:00:17.741 ********* 2025-07-12 13:31:06.841468 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:31:06.841479 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:31:06.841490 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:31:06.841500 | orchestrator | skipping: [testbed-node-3] 2025-07-12 
13:31:06.841511 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:31:06.841522 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:31:06.841573 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-07-12 13:31:06.841590 | orchestrator | 2025-07-12 13:31:06.841602 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-07-12 13:31:06.841612 | orchestrator | Saturday 12 July 2025 13:31:00 +0000 (0:00:00.918) 0:00:18.659 ********* 2025-07-12 13:31:06.841623 | orchestrator | ok: [testbed-manager] 2025-07-12 13:31:06.841634 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:31:06.841645 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:31:06.841655 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:31:06.841666 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:31:06.841677 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:31:06.841687 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:31:06.841698 | orchestrator | 2025-07-12 13:31:06.841709 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-07-12 13:31:06.841720 | orchestrator | Saturday 12 July 2025 13:31:02 +0000 (0:00:01.617) 0:00:20.277 ********* 2025-07-12 13:31:06.841731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:31:06.841744 | orchestrator | 2025-07-12 13:31:06.841755 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-12 13:31:06.841765 | orchestrator | Saturday 12 July 2025 13:31:03 +0000 (0:00:01.330) 0:00:21.608 ********* 2025-07-12 13:31:06.841776 | orchestrator | ok: [testbed-manager] 2025-07-12 
13:31:06.841787 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:31:06.841797 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:31:06.841808 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:31:06.841818 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:31:06.841829 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:31:06.841840 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:31:06.841850 | orchestrator | 2025-07-12 13:31:06.841861 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-07-12 13:31:06.841878 | orchestrator | Saturday 12 July 2025 13:31:04 +0000 (0:00:01.000) 0:00:22.608 ********* 2025-07-12 13:31:06.841889 | orchestrator | ok: [testbed-manager] 2025-07-12 13:31:06.841900 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:31:06.841911 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:31:06.841921 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:31:06.841932 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:31:06.841942 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:31:06.841953 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:31:06.841963 | orchestrator | 2025-07-12 13:31:06.841974 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-12 13:31:06.841985 | orchestrator | Saturday 12 July 2025 13:31:05 +0000 (0:00:00.885) 0:00:23.493 ********* 2025-07-12 13:31:06.841996 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:31:06.842006 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:31:06.842070 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:31:06.842085 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:31:06.842096 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:31:06.842106 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:31:06.842117 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:31:06.842139 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:31:06.842150 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:31:06.842160 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:31:06.842171 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:31:06.842182 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:31:06.842193 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 13:31:06.842203 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 13:31:06.842214 | orchestrator | 2025-07-12 13:31:06.842235 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-07-12 13:31:24.297104 | orchestrator | Saturday 12 July 2025 13:31:06 +0000 (0:00:01.241) 0:00:24.735 ********* 2025-07-12 13:31:24.297227 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:31:24.297250 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:31:24.297271 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:31:24.297283 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:31:24.297294 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:31:24.297305 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:31:24.297315 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:31:24.297327 | orchestrator | 2025-07-12 13:31:24.297348 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-07-12 13:31:24.297367 | orchestrator | Saturday 12 July 2025 13:31:07 +0000 
(0:00:00.628) 0:00:25.363 ********* 2025-07-12 13:31:24.297387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-4, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5 2025-07-12 13:31:24.297408 | orchestrator | 2025-07-12 13:31:24.297422 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-07-12 13:31:24.297441 | orchestrator | Saturday 12 July 2025 13:31:12 +0000 (0:00:04.680) 0:00:30.044 ********* 2025-07-12 13:31:24.297459 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', 
'192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297578 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297731 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297742 | orchestrator | 2025-07-12 13:31:24.297753 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-07-12 13:31:24.297764 | orchestrator | Saturday 12 July 2025 13:31:17 +0000 (0:00:05.853) 0:00:35.897 ********* 2025-07-12 13:31:24.297775 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 
42}}) 2025-07-12 13:31:24.297832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297849 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297919 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-12 13:31:24.297947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.297978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:24.298009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:30.263810 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-12 13:31:30.263911 | orchestrator | 2025-07-12 13:31:30.263925 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-07-12 13:31:30.263935 | orchestrator | Saturday 12 July 2025 13:31:24 +0000 (0:00:06.293) 0:00:42.191 ********* 2025-07-12 13:31:30.263945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:31:30.263955 | orchestrator | 2025-07-12 13:31:30.263963 | orchestrator | 
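The "Create systemd networkd netdev/network files" tasks above generate per-interface unit files from the loop items (e.g. vxlan0 with VNI 42, MTU 1350, address 192.168.112.5/20 and local IP 192.168.16.5 on testbed-manager). A minimal sketch of what such a pair of files could look like, assuming standard systemd-networkd syntax — the exact file names, keys, and how the `dests` peers are wired up (typically as FDB entries or via dispatcher scripts) are illustrative, not taken from the role:

```ini
# /etc/systemd/network/30-vxlan0.netdev (illustrative)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

# /etc/systemd/network/30-vxlan0.network (illustrative)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

Each of the `dests` addresses from the loop item would additionally need a unicast FDB entry (e.g. `bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst <peer>`) for head-end replication, which is presumably what the "Copy dispatcher scripts" task seen in the recap handles.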
TASK [osism.commons.network : List existing configuration files] *************** 2025-07-12 13:31:30.263972 | orchestrator | Saturday 12 July 2025 13:31:25 +0000 (0:00:01.093) 0:00:43.285 ********* 2025-07-12 13:31:30.263981 | orchestrator | ok: [testbed-manager] 2025-07-12 13:31:30.263991 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:31:30.263999 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:31:30.264008 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:31:30.264016 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:31:30.264025 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:31:30.264033 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:31:30.264042 | orchestrator | 2025-07-12 13:31:30.264050 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-12 13:31:30.264059 | orchestrator | Saturday 12 July 2025 13:31:26 +0000 (0:00:01.115) 0:00:44.400 ********* 2025-07-12 13:31:30.264091 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:30.264101 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:30.264110 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:30.264119 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:30.264128 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:31:30.264137 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:30.264146 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:30.264155 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:30.264165 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 
13:31:30.264173 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:31:30.264182 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:30.264191 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:30.264200 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:30.264208 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:30.264229 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:30.264238 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:30.264247 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:30.264256 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:30.264265 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:31:30.264273 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:30.264282 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:30.264291 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:30.264300 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:30.264308 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:31:30.264317 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:30.264326 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:30.264335 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:30.264343 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:30.264353 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:31:30.264363 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:31:30.264373 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 13:31:30.264383 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 13:31:30.264392 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 13:31:30.264402 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 13:31:30.264412 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:31:30.264422 | orchestrator | 2025-07-12 13:31:30.264432 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-07-12 13:31:30.264459 | orchestrator | Saturday 12 July 2025 13:31:28 +0000 (0:00:01.992) 0:00:46.393 ********* 2025-07-12 13:31:30.264469 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:31:30.264487 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:31:30.264497 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:31:30.264507 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:31:30.264517 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:31:30.264527 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:31:30.264558 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:31:30.264568 | orchestrator | 2025-07-12 13:31:30.264578 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-07-12 13:31:30.264588 | orchestrator | Saturday 12 July 2025 13:31:29 +0000 (0:00:00.680) 0:00:47.073 ********* 2025-07-12 13:31:30.264597 | orchestrator | skipping: 
[testbed-manager] 2025-07-12 13:31:30.264607 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:31:30.264616 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:31:30.264626 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:31:30.264635 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:31:30.264644 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:31:30.264654 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:31:30.264663 | orchestrator | 2025-07-12 13:31:30.264673 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:31:30.264683 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 13:31:30.264695 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:30.264705 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:30.264714 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:30.264723 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:30.264732 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:30.264740 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 13:31:30.264749 | orchestrator | 2025-07-12 13:31:30.264758 | orchestrator | 2025-07-12 13:31:30.264766 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:31:30.264775 | orchestrator | Saturday 12 July 2025 13:31:29 +0000 (0:00:00.724) 0:00:47.798 ********* 2025-07-12 13:31:30.264784 | orchestrator | =============================================================================== 
2025-07-12 13:31:30.264792 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.29s 2025-07-12 13:31:30.264801 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.85s 2025-07-12 13:31:30.264815 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.68s 2025-07-12 13:31:30.264824 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.28s 2025-07-12 13:31:30.264833 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.28s 2025-07-12 13:31:30.264841 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.08s 2025-07-12 13:31:30.264850 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.99s 2025-07-12 13:31:30.264859 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.94s 2025-07-12 13:31:30.264867 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.75s 2025-07-12 13:31:30.264876 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.62s 2025-07-12 13:31:30.264884 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.44s 2025-07-12 13:31:30.264899 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.33s 2025-07-12 13:31:30.264907 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.24s 2025-07-12 13:31:30.264916 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.18s 2025-07-12 13:31:30.264925 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.12s 2025-07-12 13:31:30.264933 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.09s 2025-07-12 
13:31:30.264942 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.08s 2025-07-12 13:31:30.264950 | orchestrator | osism.commons.network : Create required directories --------------------- 1.02s 2025-07-12 13:31:30.264959 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s 2025-07-12 13:31:30.264967 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.92s 2025-07-12 13:31:30.540972 | orchestrator | + osism apply wireguard 2025-07-12 13:31:42.428432 | orchestrator | 2025-07-12 13:31:42 | INFO  | Task 622e90ea-d941-456d-878d-48327fd86d8a (wireguard) was prepared for execution. 2025-07-12 13:31:42.428615 | orchestrator | 2025-07-12 13:31:42 | INFO  | It takes a moment until task 622e90ea-d941-456d-878d-48327fd86d8a (wireguard) has been started and output is visible here. 2025-07-12 13:32:01.839519 | orchestrator | 2025-07-12 13:32:01.839677 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-07-12 13:32:01.839704 | orchestrator | 2025-07-12 13:32:01.839725 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-07-12 13:32:01.839746 | orchestrator | Saturday 12 July 2025 13:31:46 +0000 (0:00:00.226) 0:00:00.226 ********* 2025-07-12 13:32:01.839765 | orchestrator | ok: [testbed-manager] 2025-07-12 13:32:01.839779 | orchestrator | 2025-07-12 13:32:01.839790 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-07-12 13:32:01.839801 | orchestrator | Saturday 12 July 2025 13:31:47 +0000 (0:00:01.547) 0:00:01.773 ********* 2025-07-12 13:32:01.839812 | orchestrator | changed: [testbed-manager] 2025-07-12 13:32:01.839823 | orchestrator | 2025-07-12 13:32:01.839834 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-07-12 13:32:01.839845 | orchestrator | 
Saturday 12 July 2025 13:31:54 +0000 (0:00:06.283) 0:00:08.056 ********* 2025-07-12 13:32:01.839856 | orchestrator | changed: [testbed-manager] 2025-07-12 13:32:01.839866 | orchestrator | 2025-07-12 13:32:01.839877 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-07-12 13:32:01.839888 | orchestrator | Saturday 12 July 2025 13:31:54 +0000 (0:00:00.557) 0:00:08.614 ********* 2025-07-12 13:32:01.839899 | orchestrator | changed: [testbed-manager] 2025-07-12 13:32:01.839909 | orchestrator | 2025-07-12 13:32:01.839920 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-07-12 13:32:01.839931 | orchestrator | Saturday 12 July 2025 13:31:55 +0000 (0:00:00.429) 0:00:09.043 ********* 2025-07-12 13:32:01.839942 | orchestrator | ok: [testbed-manager] 2025-07-12 13:32:01.839952 | orchestrator | 2025-07-12 13:32:01.839963 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-07-12 13:32:01.839974 | orchestrator | Saturday 12 July 2025 13:31:55 +0000 (0:00:00.524) 0:00:09.568 ********* 2025-07-12 13:32:01.839985 | orchestrator | ok: [testbed-manager] 2025-07-12 13:32:01.839996 | orchestrator | 2025-07-12 13:32:01.840006 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-07-12 13:32:01.840017 | orchestrator | Saturday 12 July 2025 13:31:56 +0000 (0:00:00.544) 0:00:10.112 ********* 2025-07-12 13:32:01.840028 | orchestrator | ok: [testbed-manager] 2025-07-12 13:32:01.840039 | orchestrator | 2025-07-12 13:32:01.840050 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-07-12 13:32:01.840061 | orchestrator | Saturday 12 July 2025 13:31:56 +0000 (0:00:00.437) 0:00:10.550 ********* 2025-07-12 13:32:01.840104 | orchestrator | changed: [testbed-manager] 2025-07-12 13:32:01.840116 | orchestrator | 2025-07-12 13:32:01.840127 | orchestrator 
| TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-07-12 13:32:01.840138 | orchestrator | Saturday 12 July 2025 13:31:57 +0000 (0:00:01.180) 0:00:11.730 ********* 2025-07-12 13:32:01.840148 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 13:32:01.840159 | orchestrator | changed: [testbed-manager] 2025-07-12 13:32:01.840170 | orchestrator | 2025-07-12 13:32:01.840181 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-07-12 13:32:01.840191 | orchestrator | Saturday 12 July 2025 13:31:58 +0000 (0:00:00.930) 0:00:12.661 ********* 2025-07-12 13:32:01.840202 | orchestrator | changed: [testbed-manager] 2025-07-12 13:32:01.840212 | orchestrator | 2025-07-12 13:32:01.840223 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-07-12 13:32:01.840234 | orchestrator | Saturday 12 July 2025 13:32:00 +0000 (0:00:01.685) 0:00:14.346 ********* 2025-07-12 13:32:01.840244 | orchestrator | changed: [testbed-manager] 2025-07-12 13:32:01.840255 | orchestrator | 2025-07-12 13:32:01.840280 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:32:01.840292 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:32:01.840304 | orchestrator | 2025-07-12 13:32:01.840314 | orchestrator | 2025-07-12 13:32:01.840325 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:32:01.840336 | orchestrator | Saturday 12 July 2025 13:32:01 +0000 (0:00:00.942) 0:00:15.288 ********* 2025-07-12 13:32:01.840347 | orchestrator | =============================================================================== 2025-07-12 13:32:01.840358 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.28s 2025-07-12 13:32:01.840369 | orchestrator | 
osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2025-07-12 13:32:01.840379 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.55s 2025-07-12 13:32:01.840390 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s 2025-07-12 13:32:01.840401 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s 2025-07-12 13:32:01.840411 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s 2025-07-12 13:32:01.840422 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-07-12 13:32:01.840433 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.54s 2025-07-12 13:32:01.840443 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-07-12 13:32:01.840454 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2025-07-12 13:32:01.840465 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-07-12 13:32:02.123385 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-07-12 13:32:02.159146 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-07-12 13:32:02.159223 | orchestrator | Dload Upload Total Spent Left Speed 2025-07-12 13:32:02.251533 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 13 100 13 0 0 140 0 --:--:-- --:--:-- --:--:-- 139 2025-07-12 13:32:02.262864 | orchestrator | + osism apply --environment custom workarounds 2025-07-12 13:32:04.106339 | orchestrator | 2025-07-12 13:32:04 | INFO  | Trying to run play workarounds in environment custom 2025-07-12 13:32:14.171120 | orchestrator | 2025-07-12 13:32:14 | INFO  | Task 
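The wireguard role tasks above (create server key pair, create preshared key, copy wg0.conf and client configuration files, manage wg-quick@wg0.service) correspond to a standard wg-quick configuration. A minimal sketch of the kind of wg0.conf the role could render, assuming standard WireGuard syntax — addresses, port, and key placeholders are illustrative, not taken from the template:

```ini
# /etc/wireguard/wg0.conf (illustrative)
[Interface]
PrivateKey = <server-private-key>
Address = 192.168.48.1/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32
```

The matching client configuration mirrors this with the roles of the keys swapped and an `Endpoint =` line pointing at the manager; `systemctl enable --now wg-quick@wg0` (the "Manage wg-quick@wg0.service service" task) then brings the tunnel up.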
058e78fe-1660-47eb-9b4f-4244664a085e (workarounds) was prepared for execution. 2025-07-12 13:32:14.171227 | orchestrator | 2025-07-12 13:32:14 | INFO  | It takes a moment until task 058e78fe-1660-47eb-9b4f-4244664a085e (workarounds) has been started and output is visible here. 2025-07-12 13:32:39.144795 | orchestrator | 2025-07-12 13:32:39.144914 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:32:39.144931 | orchestrator | 2025-07-12 13:32:39.144943 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-07-12 13:32:39.144954 | orchestrator | Saturday 12 July 2025 13:32:18 +0000 (0:00:00.189) 0:00:00.189 ********* 2025-07-12 13:32:39.144966 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-07-12 13:32:39.144977 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-07-12 13:32:39.144988 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-07-12 13:32:39.144999 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-07-12 13:32:39.145011 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-07-12 13:32:39.145022 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-07-12 13:32:39.145033 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-07-12 13:32:39.145043 | orchestrator | 2025-07-12 13:32:39.145055 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-07-12 13:32:39.145065 | orchestrator | 2025-07-12 13:32:39.145076 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-12 13:32:39.145087 | orchestrator | Saturday 12 July 2025 13:32:18 +0000 (0:00:00.772) 0:00:00.962 ********* 2025-07-12 13:32:39.145098 | orchestrator | ok: [testbed-manager] 
2025-07-12 13:32:39.145110 | orchestrator | 2025-07-12 13:32:39.145121 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-07-12 13:32:39.145133 | orchestrator | 2025-07-12 13:32:39.145143 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-12 13:32:39.145154 | orchestrator | Saturday 12 July 2025 13:32:21 +0000 (0:00:02.358) 0:00:03.321 ********* 2025-07-12 13:32:39.145165 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:32:39.145176 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:32:39.145187 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:32:39.145198 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:32:39.145208 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:32:39.145219 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:32:39.145230 | orchestrator | 2025-07-12 13:32:39.145241 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-07-12 13:32:39.145252 | orchestrator | 2025-07-12 13:32:39.145263 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-07-12 13:32:39.145274 | orchestrator | Saturday 12 July 2025 13:32:23 +0000 (0:00:01.923) 0:00:05.244 ********* 2025-07-12 13:32:39.145285 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:39.145314 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:39.145326 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:39.145337 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:39.145351 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:39.145363 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 13:32:39.145375 | orchestrator | 2025-07-12 13:32:39.145388 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-07-12 13:32:39.145401 | orchestrator | Saturday 12 July 2025 13:32:24 +0000 (0:00:01.534) 0:00:06.779 ********* 2025-07-12 13:32:39.145413 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:32:39.145425 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:32:39.145437 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:32:39.145477 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:32:39.145489 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:32:39.145501 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:32:39.145513 | orchestrator | 2025-07-12 13:32:39.145525 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-07-12 13:32:39.145537 | orchestrator | Saturday 12 July 2025 13:32:28 +0000 (0:00:03.762) 0:00:10.541 ********* 2025-07-12 13:32:39.145549 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:32:39.145589 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:32:39.145611 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:32:39.145630 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:32:39.145649 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:32:39.145662 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:32:39.145673 | orchestrator | 2025-07-12 13:32:39.145685 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-07-12 13:32:39.145697 | orchestrator | 2025-07-12 13:32:39.145707 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-07-12 
13:32:39.145718 | orchestrator | Saturday 12 July 2025 13:32:29 +0000 (0:00:00.691) 0:00:11.233 *********
2025-07-12 13:32:39.145729 | orchestrator | changed: [testbed-manager]
2025-07-12 13:32:39.145739 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:32:39.145749 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:32:39.145760 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:32:39.145770 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:32:39.145781 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:32:39.145791 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:32:39.145802 | orchestrator |
2025-07-12 13:32:39.145813 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-07-12 13:32:39.145823 | orchestrator | Saturday 12 July 2025 13:32:30 +0000 (0:00:01.624) 0:00:12.857 *********
2025-07-12 13:32:39.145834 | orchestrator | changed: [testbed-manager]
2025-07-12 13:32:39.145845 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:32:39.145855 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:32:39.145866 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:32:39.145876 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:32:39.145887 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:32:39.145915 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:32:39.145927 | orchestrator |
2025-07-12 13:32:39.145937 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-07-12 13:32:39.145948 | orchestrator | Saturday 12 July 2025 13:32:32 +0000 (0:00:01.609) 0:00:14.467 *********
2025-07-12 13:32:39.145959 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:32:39.145970 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:32:39.145980 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:32:39.145991 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:32:39.146001 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:32:39.146012 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:32:39.146071 | orchestrator | ok: [testbed-manager]
2025-07-12 13:32:39.146083 | orchestrator |
2025-07-12 13:32:39.146094 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-07-12 13:32:39.146105 | orchestrator | Saturday 12 July 2025 13:32:33 +0000 (0:00:01.555) 0:00:16.022 *********
2025-07-12 13:32:39.146115 | orchestrator | changed: [testbed-manager]
2025-07-12 13:32:39.146126 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:32:39.146137 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:32:39.146147 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:32:39.146158 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:32:39.146169 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:32:39.146179 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:32:39.146190 | orchestrator |
2025-07-12 13:32:39.146200 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-07-12 13:32:39.146211 | orchestrator | Saturday 12 July 2025 13:32:35 +0000 (0:00:01.823) 0:00:17.846 *********
2025-07-12 13:32:39.146221 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:32:39.146243 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:32:39.146253 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:32:39.146264 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:32:39.146274 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:32:39.146285 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:32:39.146295 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:32:39.146306 | orchestrator |
2025-07-12 13:32:39.146316 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-07-12 13:32:39.146327 | orchestrator |
2025-07-12 13:32:39.146337 | orchestrator | TASK [Install python3-docker] **************************************************
2025-07-12 13:32:39.146348 | orchestrator | Saturday 12 July 2025 13:32:36 +0000 (0:00:00.643) 0:00:18.490 *********
2025-07-12 13:32:39.146359 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:32:39.146369 | orchestrator | ok: [testbed-manager]
2025-07-12 13:32:39.146380 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:32:39.146390 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:32:39.146401 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:32:39.146411 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:32:39.146422 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:32:39.146432 | orchestrator |
2025-07-12 13:32:39.146443 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:32:39.146456 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:32:39.146468 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:32:39.146479 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:32:39.146490 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:32:39.146500 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:32:39.146511 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:32:39.146522 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:32:39.146532 | orchestrator |
2025-07-12 13:32:39.146543 | orchestrator |
2025-07-12 13:32:39.146554 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:32:39.146592 | orchestrator | Saturday 12 July 2025 13:32:39 +0000 (0:00:02.668) 0:00:21.158 *********
2025-07-12 13:32:39.146605 | orchestrator | ===============================================================================
2025-07-12 13:32:39.146615 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.76s
2025-07-12 13:32:39.146626 | orchestrator | Install python3-docker -------------------------------------------------- 2.67s
2025-07-12 13:32:39.146636 | orchestrator | Apply netplan configuration --------------------------------------------- 2.36s
2025-07-12 13:32:39.146647 | orchestrator | Apply netplan configuration --------------------------------------------- 1.92s
2025-07-12 13:32:39.146657 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.82s
2025-07-12 13:32:39.146668 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.62s
2025-07-12 13:32:39.146679 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.61s
2025-07-12 13:32:39.146689 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.56s
2025-07-12 13:32:39.146700 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s
2025-07-12 13:32:39.146719 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s
2025-07-12 13:32:39.146730 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s
2025-07-12 13:32:39.146748 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s
2025-07-12 13:32:39.784113 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-07-12 13:32:51.704990 | orchestrator | 2025-07-12 13:32:51 | INFO  | Task 4a1368bd-7b5a-4d31-ace0-af928132adbf (reboot) was prepared for execution.
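Note: the `osism apply reboot` command above is gated by the extra variable `ireallymeanit=yes`, so an accidental invocation exits before touching any host. A minimal sketch of such a confirmation guard in shell (the function name is illustrative, not from the playbook):

```shell
# Hypothetical confirmation guard in the spirit of the reboot play's
# "Exit playbook, if user did not mean to reboot systems" task: refuse
# to proceed unless the caller explicitly passed ireallymeanit=yes.
confirm_or_exit() {
    if [[ "${ireallymeanit:-no}" != "yes" ]]; then
        echo "refusing to reboot: run with -e ireallymeanit=yes" >&2
        return 1
    fi
}
```

In the log that follows, the guard task shows as `skipping` on every node precisely because the variable was supplied.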
2025-07-12 13:32:51.705102 | orchestrator | 2025-07-12 13:32:51 | INFO  | It takes a moment until task 4a1368bd-7b5a-4d31-ace0-af928132adbf (reboot) has been started and output is visible here.
2025-07-12 13:33:01.922769 | orchestrator |
2025-07-12 13:33:01.922886 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-12 13:33:01.922904 | orchestrator |
2025-07-12 13:33:01.922916 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-12 13:33:01.922928 | orchestrator | Saturday 12 July 2025 13:32:55 +0000 (0:00:00.220) 0:00:00.220 *********
2025-07-12 13:33:01.922939 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:33:01.922951 | orchestrator |
2025-07-12 13:33:01.922980 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-12 13:33:01.922992 | orchestrator | Saturday 12 July 2025 13:32:55 +0000 (0:00:00.114) 0:00:00.335 *********
2025-07-12 13:33:01.923003 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:33:01.923014 | orchestrator |
2025-07-12 13:33:01.923025 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-12 13:33:01.923036 | orchestrator | Saturday 12 July 2025 13:32:56 +0000 (0:00:00.974) 0:00:01.310 *********
2025-07-12 13:33:01.923047 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:33:01.923057 | orchestrator |
2025-07-12 13:33:01.923068 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-12 13:33:01.923079 | orchestrator |
2025-07-12 13:33:01.923090 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-12 13:33:01.923101 | orchestrator | Saturday 12 July 2025 13:32:56 +0000 (0:00:00.109) 0:00:01.419 *********
2025-07-12 13:33:01.923112 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:33:01.923122 | orchestrator |
2025-07-12 13:33:01.923133 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-12 13:33:01.923144 | orchestrator | Saturday 12 July 2025 13:32:57 +0000 (0:00:00.101) 0:00:01.521 *********
2025-07-12 13:33:01.923155 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:33:01.923165 | orchestrator |
2025-07-12 13:33:01.923176 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-12 13:33:01.923187 | orchestrator | Saturday 12 July 2025 13:32:57 +0000 (0:00:00.642) 0:00:02.164 *********
2025-07-12 13:33:01.923198 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:33:01.923209 | orchestrator |
2025-07-12 13:33:01.923224 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-12 13:33:01.923235 | orchestrator |
2025-07-12 13:33:01.923246 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-12 13:33:01.923257 | orchestrator | Saturday 12 July 2025 13:32:57 +0000 (0:00:00.144) 0:00:02.309 *********
2025-07-12 13:33:01.923267 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:33:01.923278 | orchestrator |
2025-07-12 13:33:01.923288 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-12 13:33:01.923299 | orchestrator | Saturday 12 July 2025 13:32:58 +0000 (0:00:00.208) 0:00:02.517 *********
2025-07-12 13:33:01.923312 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:33:01.923324 | orchestrator |
2025-07-12 13:33:01.923335 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-12 13:33:01.923348 | orchestrator | Saturday 12 July 2025 13:32:58 +0000 (0:00:00.664) 0:00:03.182 *********
2025-07-12 13:33:01.923359 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:33:01.923372 | orchestrator |
2025-07-12 13:33:01.923407 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-12 13:33:01.923420 | orchestrator |
2025-07-12 13:33:01.923432 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-12 13:33:01.923444 | orchestrator | Saturday 12 July 2025 13:32:58 +0000 (0:00:00.129) 0:00:03.311 *********
2025-07-12 13:33:01.923457 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:33:01.923469 | orchestrator |
2025-07-12 13:33:01.923480 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-12 13:33:01.923493 | orchestrator | Saturday 12 July 2025 13:32:58 +0000 (0:00:00.109) 0:00:03.421 *********
2025-07-12 13:33:01.923504 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:33:01.923517 | orchestrator |
2025-07-12 13:33:01.923530 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-12 13:33:01.923542 | orchestrator | Saturday 12 July 2025 13:32:59 +0000 (0:00:00.698) 0:00:04.119 *********
2025-07-12 13:33:01.923554 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:33:01.923566 | orchestrator |
2025-07-12 13:33:01.923599 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-12 13:33:01.923613 | orchestrator |
2025-07-12 13:33:01.923625 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-12 13:33:01.923638 | orchestrator | Saturday 12 July 2025 13:32:59 +0000 (0:00:00.117) 0:00:04.237 *********
2025-07-12 13:33:01.923651 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:33:01.923663 | orchestrator |
2025-07-12 13:33:01.923674 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-12 13:33:01.923685 | orchestrator | Saturday 12 July 2025 13:32:59 +0000 (0:00:00.112) 0:00:04.350 *********
2025-07-12 13:33:01.923695 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:33:01.923706 | orchestrator |
2025-07-12 13:33:01.923716 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-12 13:33:01.923727 | orchestrator | Saturday 12 July 2025 13:33:00 +0000 (0:00:00.686) 0:00:05.036 *********
2025-07-12 13:33:01.923738 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:33:01.923748 | orchestrator |
2025-07-12 13:33:01.923759 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-07-12 13:33:01.923769 | orchestrator |
2025-07-12 13:33:01.923780 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-07-12 13:33:01.923791 | orchestrator | Saturday 12 July 2025 13:33:00 +0000 (0:00:00.143) 0:00:05.180 *********
2025-07-12 13:33:01.923801 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:33:01.923812 | orchestrator |
2025-07-12 13:33:01.923822 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-07-12 13:33:01.923833 | orchestrator | Saturday 12 July 2025 13:33:00 +0000 (0:00:00.116) 0:00:05.297 *********
2025-07-12 13:33:01.923844 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:33:01.923854 | orchestrator |
2025-07-12 13:33:01.923865 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-07-12 13:33:01.923876 | orchestrator | Saturday 12 July 2025 13:33:01 +0000 (0:00:00.672) 0:00:05.969 *********
2025-07-12 13:33:01.923904 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:33:01.923916 | orchestrator |
2025-07-12 13:33:01.923927 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:33:01.923938 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:33:01.923950 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:33:01.923961 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:33:01.923972 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:33:01.923993 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:33:01.924004 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:33:01.924014 | orchestrator |
2025-07-12 13:33:01.924025 | orchestrator |
2025-07-12 13:33:01.924037 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:33:01.924048 | orchestrator | Saturday 12 July 2025 13:33:01 +0000 (0:00:00.045) 0:00:06.015 *********
2025-07-12 13:33:01.924058 | orchestrator | ===============================================================================
2025-07-12 13:33:01.924069 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.34s
2025-07-12 13:33:01.924085 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.76s
2025-07-12 13:33:01.924096 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.69s
2025-07-12 13:33:02.210085 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-07-12 13:33:14.279268 | orchestrator | 2025-07-12 13:33:14 | INFO  | Task 9f4dfa81-0c71-491c-87ca-a29ef651fd48 (wait-for-connection) was prepared for execution.
2025-07-12 13:33:14.279385 | orchestrator | 2025-07-12 13:33:14 | INFO  | It takes a moment until task 9f4dfa81-0c71-491c-87ca-a29ef651fd48 (wait-for-connection) has been started and output is visible here.
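Note: the reboot play above fires the reboot without waiting, and the separate `wait-for-connection` run then polls each node until SSH comes back. That polling step can be sketched as a generic retry loop (function and parameter names are illustrative, not taken from the playbook):

```shell
# Generic "wait until reachable" loop in the spirit of the
# wait-for-connection play: retry a reachability check until it
# succeeds or a timeout expires. The check command is a parameter.
wait_until_reachable() {
    local check_cmd=$1      # command that exits 0 once the host is reachable
    local timeout=$2        # overall budget in seconds
    local interval=${3:-5}  # pause between attempts
    local deadline=$(( SECONDS + timeout ))
    until $check_cmd; do
        (( SECONDS >= deadline )) && return 1
        sleep "$interval"
    done
}
```

Against a real host the check would be an SSH probe, e.g. `wait_until_reachable "ssh -o ConnectTimeout=5 testbed-node-0 true" 300`; in the job itself this is handled per host by Ansible's `wait_for_connection` module.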
2025-07-12 13:33:30.227496 | orchestrator |
2025-07-12 13:33:30.227667 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-07-12 13:33:30.227686 | orchestrator |
2025-07-12 13:33:30.227698 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-07-12 13:33:30.227710 | orchestrator | Saturday 12 July 2025 13:33:18 +0000 (0:00:00.241) 0:00:00.241 *********
2025-07-12 13:33:30.227721 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:33:30.227738 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:33:30.227749 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:33:30.227760 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:33:30.227771 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:33:30.227782 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:33:30.227793 | orchestrator |
2025-07-12 13:33:30.227805 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:33:30.227816 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:33:30.227829 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:33:30.227840 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:33:30.227851 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:33:30.227862 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:33:30.227873 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:33:30.227884 | orchestrator |
2025-07-12 13:33:30.227896 | orchestrator |
2025-07-12 13:33:30.227907 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:33:30.227918 | orchestrator | Saturday 12 July 2025 13:33:29 +0000 (0:00:11.567) 0:00:11.808 *********
2025-07-12 13:33:30.227929 | orchestrator | ===============================================================================
2025-07-12 13:33:30.227940 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.57s
2025-07-12 13:33:30.553079 | orchestrator | + osism apply hddtemp
2025-07-12 13:33:42.467086 | orchestrator | 2025-07-12 13:33:42 | INFO  | Task e3aa6247-9fb1-4096-8781-64ada5526387 (hddtemp) was prepared for execution.
2025-07-12 13:33:42.467203 | orchestrator | 2025-07-12 13:33:42 | INFO  | It takes a moment until task e3aa6247-9fb1-4096-8781-64ada5526387 (hddtemp) has been started and output is visible here.
2025-07-12 13:34:09.358441 | orchestrator |
2025-07-12 13:34:09.358574 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-07-12 13:34:09.358592 | orchestrator |
2025-07-12 13:34:09.358658 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-07-12 13:34:09.358671 | orchestrator | Saturday 12 July 2025 13:33:46 +0000 (0:00:00.277) 0:00:00.277 *********
2025-07-12 13:34:09.358682 | orchestrator | ok: [testbed-manager]
2025-07-12 13:34:09.358694 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:34:09.358705 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:34:09.358716 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:34:09.358727 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:34:09.358737 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:34:09.358749 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:34:09.358760 | orchestrator |
2025-07-12 13:34:09.358771 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-07-12 13:34:09.358781 | orchestrator | Saturday 12 July 2025 13:33:47 +0000 (0:00:00.736) 0:00:01.014 *********
2025-07-12 13:34:09.358793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:34:09.358807 | orchestrator |
2025-07-12 13:34:09.358818 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-07-12 13:34:09.358829 | orchestrator | Saturday 12 July 2025 13:33:48 +0000 (0:00:01.206) 0:00:02.220 *********
2025-07-12 13:34:09.358840 | orchestrator | ok: [testbed-manager]
2025-07-12 13:34:09.358850 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:34:09.358861 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:34:09.358872 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:34:09.358886 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:34:09.358905 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:34:09.358922 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:34:09.358941 | orchestrator |
2025-07-12 13:34:09.358960 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-07-12 13:34:09.358982 | orchestrator | Saturday 12 July 2025 13:33:50 +0000 (0:00:01.983) 0:00:04.203 *********
2025-07-12 13:34:09.359001 | orchestrator | changed: [testbed-manager]
2025-07-12 13:34:09.359018 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:34:09.359046 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:34:09.359060 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:34:09.359072 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:34:09.359085 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:34:09.359097 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:34:09.359109 | orchestrator |
2025-07-12 13:34:09.359122 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-07-12 13:34:09.359134 | orchestrator | Saturday 12 July 2025 13:33:51 +0000 (0:00:01.179) 0:00:05.383 *********
2025-07-12 13:34:09.359147 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:34:09.359160 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:34:09.359172 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:34:09.359185 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:34:09.359197 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:34:09.359209 | orchestrator | ok: [testbed-manager]
2025-07-12 13:34:09.359226 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:34:09.359244 | orchestrator |
2025-07-12 13:34:09.359264 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-07-12 13:34:09.359283 | orchestrator | Saturday 12 July 2025 13:33:52 +0000 (0:00:01.150) 0:00:06.533 *********
2025-07-12 13:34:09.359304 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:34:09.359353 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:34:09.359374 | orchestrator | changed: [testbed-manager]
2025-07-12 13:34:09.359392 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:34:09.359410 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:34:09.359429 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:34:09.359446 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:34:09.359465 | orchestrator |
2025-07-12 13:34:09.359483 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-07-12 13:34:09.359500 | orchestrator | Saturday 12 July 2025 13:33:53 +0000 (0:00:00.876) 0:00:07.410 *********
2025-07-12 13:34:09.359518 | orchestrator | changed: [testbed-manager]
2025-07-12 13:34:09.359538 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:34:09.359556 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:34:09.359575 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:34:09.359618 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:34:09.359634 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:34:09.359645 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:34:09.359656 | orchestrator |
2025-07-12 13:34:09.359667 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-07-12 13:34:09.359677 | orchestrator | Saturday 12 July 2025 13:34:06 +0000 (0:00:12.446) 0:00:19.856 *********
2025-07-12 13:34:09.359689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:34:09.359701 | orchestrator |
2025-07-12 13:34:09.359712 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-07-12 13:34:09.359722 | orchestrator | Saturday 12 July 2025 13:34:07 +0000 (0:00:01.191) 0:00:21.048 *********
2025-07-12 13:34:09.359733 | orchestrator | changed: [testbed-manager]
2025-07-12 13:34:09.359744 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:34:09.359755 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:34:09.359766 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:34:09.359785 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:34:09.359803 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:34:09.359820 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:34:09.359838 | orchestrator |
2025-07-12 13:34:09.359857 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:34:09.359876 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:34:09.359923 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:34:09.359945 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:34:09.359965 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:34:09.359984 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:34:09.359998 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:34:09.360009 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 13:34:09.360019 | orchestrator |
2025-07-12 13:34:09.360030 | orchestrator |
2025-07-12 13:34:09.360041 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:34:09.360051 | orchestrator | Saturday 12 July 2025 13:34:09 +0000 (0:00:01.760) 0:00:22.808 *********
2025-07-12 13:34:09.360074 | orchestrator | ===============================================================================
2025-07-12 13:34:09.360085 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.45s
2025-07-12 13:34:09.360096 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.98s
2025-07-12 13:34:09.360107 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.76s
2025-07-12 13:34:09.360117 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s
2025-07-12 13:34:09.360128 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.19s
2025-07-12 13:34:09.360147 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s
2025-07-12 13:34:09.360158 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.15s
2025-07-12 13:34:09.360168 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.88s
2025-07-12 13:34:09.360179 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.74s
2025-07-12 13:34:09.546180 | orchestrator | ++ semver latest 7.1.1
2025-07-12 13:34:09.583216 | orchestrator | + [[ -1 -ge 0 ]]
2025-07-12 13:34:09.583289 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-12 13:34:09.583305 | orchestrator | + sudo systemctl restart manager.service
2025-07-12 13:34:22.941730 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-12 13:34:22.941843 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-12 13:34:22.941860 | orchestrator | + local max_attempts=60
2025-07-12 13:34:22.941875 | orchestrator | + local name=ceph-ansible
2025-07-12 13:34:22.941886 | orchestrator | + local attempt_num=1
2025-07-12 13:34:22.941898 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:22.979379 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:22.979448 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:22.979460 | orchestrator | + sleep 5
2025-07-12 13:34:27.988115 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:28.025263 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:28.025356 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:28.025371 | orchestrator | + sleep 5
2025-07-12 13:34:33.031119 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:33.074465 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:33.074553 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:33.074568 | orchestrator | + sleep 5
2025-07-12 13:34:38.078982 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:38.112597 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:38.112719 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:38.112733 | orchestrator | + sleep 5
2025-07-12 13:34:43.117923 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:43.154591 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:43.154706 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:43.154720 | orchestrator | + sleep 5
2025-07-12 13:34:48.160142 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:48.203042 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:48.203158 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:48.203185 | orchestrator | + sleep 5
2025-07-12 13:34:53.210369 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:53.256450 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:53.256541 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:53.256556 | orchestrator | + sleep 5
2025-07-12 13:34:58.263045 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:34:58.304049 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:34:58.304154 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:34:58.304170 | orchestrator | + sleep 5
2025-07-12 13:35:03.310208 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:35:03.351967 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:03.352050 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:35:03.352066 | orchestrator | + sleep 5
2025-07-12 13:35:08.356203 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:35:08.399403 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:08.399483 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:35:08.399497 | orchestrator | + sleep 5
2025-07-12 13:35:13.404499 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:35:13.444700 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:13.444814 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:35:13.444831 | orchestrator | + sleep 5
2025-07-12 13:35:18.450511 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:35:18.491872 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:18.491959 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:35:18.491973 | orchestrator | + sleep 5
2025-07-12 13:35:23.497357 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:35:23.544545 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:23.544675 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-12 13:35:23.544692 | orchestrator | + sleep 5
2025-07-12 13:35:28.549942 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 13:35:28.589398 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:28.589499 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-12 13:35:28.589516 | orchestrator | + local max_attempts=60
2025-07-12 13:35:28.589528 | orchestrator | + local name=kolla-ansible
2025-07-12 13:35:28.589541 | orchestrator | + local attempt_num=1
2025-07-12 13:35:28.590234 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-12 13:35:28.630263 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:28.630334 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-12 13:35:28.630348 | orchestrator | + local max_attempts=60
2025-07-12 13:35:28.630361 | orchestrator | + local name=osism-ansible
2025-07-12 13:35:28.630373 | orchestrator | + local attempt_num=1
2025-07-12 13:35:28.631295 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-12 13:35:28.671786 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 13:35:28.671848 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-12 13:35:28.671862 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-12 13:35:28.848573 | orchestrator | ARA in ceph-ansible already disabled.
2025-07-12 13:35:29.011464 | orchestrator | ARA in kolla-ansible already disabled.
2025-07-12 13:35:29.159764 | orchestrator | ARA in osism-ansible already disabled.
2025-07-12 13:35:29.311122 | orchestrator | ARA in osism-kubernetes already disabled.
2025-07-12 13:35:29.312500 | orchestrator | + osism apply gather-facts
2025-07-12 13:35:41.275995 | orchestrator | 2025-07-12 13:35:41 | INFO  | Task 9dd3611e-3d82-41db-b9c5-638b342546be (gather-facts) was prepared for execution.
2025-07-12 13:35:41.276075 | orchestrator | 2025-07-12 13:35:41 | INFO  | It takes a moment until task 9dd3611e-3d82-41db-b9c5-638b342546be (gather-facts) has been started and output is visible here.
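Note: the shell trace above shows `wait_for_container_healthy` polling `docker inspect` every 5 seconds until a container reports `healthy` (for ceph-ansible the status moves from `unhealthy` through `starting` to `healthy` over roughly a minute). A sketch of that loop follows; to keep it runnable without Docker, the status probe is a parameter rather than the hard-wired `/usr/bin/docker inspect -f '{{.State.Health.Status}}' <name>` call seen in the trace:

```shell
# Sketch of the wait_for_container_healthy helper traced above.
# probe: any command that prints the current health status on stdout.
wait_for_healthy() {
    local max_attempts=$1
    local probe=$2
    local interval=${3:-5}  # 5-second poll interval, as in the trace
    local attempt_num=1
    until [[ "$($probe)" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            return 1  # give up after max_attempts polls
        fi
        sleep "$interval"
    done
}
```

In the job this is invoked as `wait_for_container_healthy 60 <container>`, i.e. up to 60 polls (about five minutes) per container before failing.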
2025-07-12 13:35:54.670087 | orchestrator | 2025-07-12 13:35:54.670207 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-12 13:35:54.670234 | orchestrator | 2025-07-12 13:35:54.670247 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-12 13:35:54.670259 | orchestrator | Saturday 12 July 2025 13:35:45 +0000 (0:00:00.226) 0:00:00.226 ********* 2025-07-12 13:35:54.670271 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:35:54.670283 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:35:54.670294 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:35:54.670321 | orchestrator | ok: [testbed-manager] 2025-07-12 13:35:54.670332 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:35:54.670343 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:35:54.670354 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:35:54.670364 | orchestrator | 2025-07-12 13:35:54.670376 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-12 13:35:54.670387 | orchestrator | 2025-07-12 13:35:54.670398 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-12 13:35:54.670409 | orchestrator | Saturday 12 July 2025 13:35:53 +0000 (0:00:08.489) 0:00:08.716 ********* 2025-07-12 13:35:54.670443 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:35:54.670455 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:35:54.670466 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:35:54.670477 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:35:54.670487 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:35:54.670498 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:35:54.670509 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:35:54.670519 | orchestrator | 2025-07-12 13:35:54.670530 | orchestrator | PLAY RECAP 
********************************************************************* 2025-07-12 13:35:54.670541 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 13:35:54.670553 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 13:35:54.670565 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 13:35:54.670578 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 13:35:54.670590 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 13:35:54.670602 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 13:35:54.670615 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 13:35:54.670656 | orchestrator | 2025-07-12 13:35:54.670670 | orchestrator | 2025-07-12 13:35:54.670683 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:35:54.670696 | orchestrator | Saturday 12 July 2025 13:35:54 +0000 (0:00:00.526) 0:00:09.243 ********* 2025-07-12 13:35:54.670708 | orchestrator | =============================================================================== 2025-07-12 13:35:54.670721 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.49s 2025-07-12 13:35:54.670733 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-07-12 13:35:54.939610 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-07-12 13:35:54.949812 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-07-12 13:35:54.961356 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-07-12 13:35:54.976973 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-07-12 13:35:54.988164 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-07-12 13:35:54.999788 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-07-12 13:35:55.022126 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-07-12 13:35:55.036777 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-07-12 13:35:55.055311 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-07-12 13:35:55.074922 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-07-12 13:35:55.088789 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-07-12 13:35:55.100128 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-07-12 13:35:55.111084 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-07-12 13:35:55.121726 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-07-12 13:35:55.132265 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-07-12 13:35:55.144936 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-07-12 13:35:55.155781 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-07-12 13:35:55.166185 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-07-12 13:35:55.178705 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-07-12 13:35:55.188722 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-07-12 13:35:55.201784 | orchestrator | + [[ false == \t\r\u\e ]] 2025-07-12 13:35:55.408281 | orchestrator | ok: Runtime: 0:22:49.076200 2025-07-12 13:35:55.501010 | 2025-07-12 13:35:55.501123 | TASK [Deploy services] 2025-07-12 13:35:56.032626 | orchestrator | skipping: Conditional result was False 2025-07-12 13:35:56.049782 | 2025-07-12 13:35:56.049923 | TASK [Deploy in a nutshell] 2025-07-12 13:35:56.754671 | orchestrator | 2025-07-12 13:35:56.754859 | orchestrator | # PULL IMAGES 2025-07-12 13:35:56.754883 | orchestrator | 2025-07-12 13:35:56.754897 | orchestrator | + set -e 2025-07-12 13:35:56.754916 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 13:35:56.754936 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 13:35:56.754951 | orchestrator | ++ INTERACTIVE=false 2025-07-12 13:35:56.754997 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 13:35:56.755020 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 13:35:56.755035 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 13:35:56.755047 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 13:35:56.755065 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 13:35:56.755077 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 13:35:56.755095 | orchestrator | ++ 
CEPH_VERSION=reef 2025-07-12 13:35:56.755106 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 13:35:56.755125 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 13:35:56.755137 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-12 13:35:56.755151 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-12 13:35:56.755163 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-12 13:35:56.755175 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-12 13:35:56.755187 | orchestrator | ++ export ARA=false 2025-07-12 13:35:56.755197 | orchestrator | ++ ARA=false 2025-07-12 13:35:56.755208 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-12 13:35:56.755220 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-12 13:35:56.755231 | orchestrator | ++ export TEMPEST=false 2025-07-12 13:35:56.755242 | orchestrator | ++ TEMPEST=false 2025-07-12 13:35:56.755253 | orchestrator | ++ export IS_ZUUL=true 2025-07-12 13:35:56.755264 | orchestrator | ++ IS_ZUUL=true 2025-07-12 13:35:56.755275 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2025-07-12 13:35:56.755286 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2025-07-12 13:35:56.755297 | orchestrator | ++ export EXTERNAL_API=false 2025-07-12 13:35:56.755307 | orchestrator | ++ EXTERNAL_API=false 2025-07-12 13:35:56.755318 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-12 13:35:56.755330 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-12 13:35:56.755341 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-12 13:35:56.755352 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-12 13:35:56.755364 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-12 13:35:56.755381 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-12 13:35:56.755393 | orchestrator | + echo 2025-07-12 13:35:56.755404 | orchestrator | + echo '# PULL IMAGES' 2025-07-12 13:35:56.755415 | orchestrator | + echo 2025-07-12 13:35:56.755441 | orchestrator | ++ semver latest 7.0.0 2025-07-12 
13:35:56.812867 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-12 13:35:56.812938 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-12 13:35:56.812955 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-07-12 13:35:58.586389 | orchestrator | 2025-07-12 13:35:58 | INFO  | Trying to run play pull-images in environment custom 2025-07-12 13:36:08.703758 | orchestrator | 2025-07-12 13:36:08 | INFO  | Task e6811608-22f9-48dc-a2bc-2127f00c1719 (pull-images) was prepared for execution. 2025-07-12 13:36:08.703942 | orchestrator | 2025-07-12 13:36:08 | INFO  | It takes a moment until task e6811608-22f9-48dc-a2bc-2127f00c1719 (pull-images) has been started and output is visible here. 2025-07-12 13:38:20.718426 | orchestrator | 2025-07-12 13:38:20.718554 | orchestrator | PLAY [Pull images] ************************************************************* 2025-07-12 13:38:20.718573 | orchestrator | 2025-07-12 13:38:20.718587 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-07-12 13:38:20.718608 | orchestrator | Saturday 12 July 2025 13:36:12 +0000 (0:00:00.164) 0:00:00.164 ********* 2025-07-12 13:38:20.718620 | orchestrator | changed: [testbed-manager] 2025-07-12 13:38:20.718632 | orchestrator | 2025-07-12 13:38:20.718644 | orchestrator | TASK [Pull other images] ******************************************************* 2025-07-12 13:38:20.718655 | orchestrator | Saturday 12 July 2025 13:37:22 +0000 (0:01:09.684) 0:01:09.849 ********* 2025-07-12 13:38:20.718668 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-07-12 13:38:20.718734 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-07-12 13:38:20.718747 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-07-12 13:38:20.718758 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-07-12 13:38:20.718769 | orchestrator | changed: [testbed-manager] => (item=common) 2025-07-12 13:38:20.718781 | orchestrator | 
changed: [testbed-manager] => (item=designate) 2025-07-12 13:38:20.718825 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-07-12 13:38:20.718840 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-07-12 13:38:20.718852 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-07-12 13:38:20.718863 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-07-12 13:38:20.718873 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-07-12 13:38:20.718885 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-07-12 13:38:20.718896 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-07-12 13:38:20.718907 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-07-12 13:38:20.718918 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-07-12 13:38:20.718929 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-07-12 13:38:20.718940 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-07-12 13:38:20.718951 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-07-12 13:38:20.718962 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-07-12 13:38:20.718973 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-07-12 13:38:20.718984 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-07-12 13:38:20.718995 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-07-12 13:38:20.719006 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-07-12 13:38:20.719017 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-07-12 13:38:20.719028 | orchestrator | 2025-07-12 13:38:20.719039 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:38:20.719051 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:38:20.719064 | orchestrator | 
2025-07-12 13:38:20.719075 | orchestrator | 2025-07-12 13:38:20.719086 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:38:20.719097 | orchestrator | Saturday 12 July 2025 13:38:20 +0000 (0:00:58.143) 0:02:07.993 ********* 2025-07-12 13:38:20.719108 | orchestrator | =============================================================================== 2025-07-12 13:38:20.719119 | orchestrator | Pull keystone image ---------------------------------------------------- 69.68s 2025-07-12 13:38:20.719130 | orchestrator | Pull other images ------------------------------------------------------ 58.14s 2025-07-12 13:38:22.958206 | orchestrator | 2025-07-12 13:38:22 | INFO  | Trying to run play wipe-partitions in environment custom 2025-07-12 13:38:33.140051 | orchestrator | 2025-07-12 13:38:33 | INFO  | Task 09f0b0bc-e487-474a-827f-1d201e1c0027 (wipe-partitions) was prepared for execution. 2025-07-12 13:38:33.140169 | orchestrator | 2025-07-12 13:38:33 | INFO  | It takes a moment until task 09f0b0bc-e487-474a-827f-1d201e1c0027 (wipe-partitions) has been started and output is visible here. 
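The trace gates the image pull on the manager version: `semver latest 7.0.0` returns `-1`, the numeric `[[ -1 -ge 0 ]]` check fails, and the `latest == latest` fallback triggers `osism apply -r 2 -e custom pull-images` (two retries, `custom` environment). A hedged sketch of that gate — `semver_ge` here is a stand-in built on `sort -V`, not the real `semver` helper, and the apply call is echoed rather than executed:

```shell
# Sketch of the version gate seen in the trace. The real `semver` helper
# returns -1/0/1; semver_ge is a stand-in, and "latest" is handled as a
# special case just as the trace's separate [[ latest == latest ]] test does.
semver_ge() {
    [[ "$1" == "latest" ]] && return 1   # "latest" never satisfies a numeric compare
    [[ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" == "$2" ]]
}

MANAGER_VERSION=${MANAGER_VERSION:-6.0.0}  # example value; this job used "latest"
if semver_ge "$MANAGER_VERSION" 7.0.0 || [[ "$MANAGER_VERSION" == "latest" ]]; then
    # The job runs: osism apply -r 2 -e custom pull-images
    echo "would run: osism apply -r 2 -e custom pull-images"
fi
```

The `-r 2` retry flag matches the `OSISM_APPLY_RETRY` handling visible earlier in the environment dump.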
2025-07-12 13:38:45.573240 | orchestrator | 2025-07-12 13:38:45.573361 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-07-12 13:38:45.573378 | orchestrator | 2025-07-12 13:38:45.573390 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-07-12 13:38:45.573412 | orchestrator | Saturday 12 July 2025 13:38:37 +0000 (0:00:00.137) 0:00:00.137 ********* 2025-07-12 13:38:45.573424 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:38:45.573436 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:38:45.573447 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:38:45.573458 | orchestrator | 2025-07-12 13:38:45.573469 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-07-12 13:38:45.573480 | orchestrator | Saturday 12 July 2025 13:38:37 +0000 (0:00:00.570) 0:00:00.707 ********* 2025-07-12 13:38:45.573491 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:38:45.573502 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:38:45.573513 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:38:45.573546 | orchestrator | 2025-07-12 13:38:45.573558 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-07-12 13:38:45.573569 | orchestrator | Saturday 12 July 2025 13:38:38 +0000 (0:00:00.303) 0:00:01.010 ********* 2025-07-12 13:38:45.573580 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:38:45.573592 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:38:45.573603 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:38:45.573614 | orchestrator | 2025-07-12 13:38:45.573625 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-07-12 13:38:45.573636 | orchestrator | Saturday 12 July 2025 13:38:38 +0000 (0:00:00.825) 0:00:01.836 ********* 2025-07-12 13:38:45.573646 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 13:38:45.573657 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:38:45.573668 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:38:45.573678 | orchestrator | 2025-07-12 13:38:45.573728 | orchestrator | TASK [Check device availability] *********************************************** 2025-07-12 13:38:45.573740 | orchestrator | Saturday 12 July 2025 13:38:39 +0000 (0:00:00.245) 0:00:02.082 ********* 2025-07-12 13:38:45.573751 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-07-12 13:38:45.573764 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-07-12 13:38:45.573776 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-07-12 13:38:45.573789 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-07-12 13:38:45.573806 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-07-12 13:38:45.573818 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-07-12 13:38:45.573831 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-07-12 13:38:45.573843 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-07-12 13:38:45.573855 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-07-12 13:38:45.573867 | orchestrator | 2025-07-12 13:38:45.573880 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-07-12 13:38:45.573892 | orchestrator | Saturday 12 July 2025 13:38:40 +0000 (0:00:01.202) 0:00:03.284 ********* 2025-07-12 13:38:45.573904 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-07-12 13:38:45.573916 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-07-12 13:38:45.573928 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-07-12 13:38:45.573940 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-07-12 13:38:45.573952 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-07-12 13:38:45.573964 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-07-12 13:38:45.573976 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-07-12 13:38:45.573988 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-07-12 13:38:45.574000 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-07-12 13:38:45.574012 | orchestrator | 2025-07-12 13:38:45.574115 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-07-12 13:38:45.574128 | orchestrator | Saturday 12 July 2025 13:38:41 +0000 (0:00:01.397) 0:00:04.682 ********* 2025-07-12 13:38:45.574141 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-07-12 13:38:45.574152 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-07-12 13:38:45.574163 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-07-12 13:38:45.574174 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-07-12 13:38:45.574185 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-07-12 13:38:45.574196 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-07-12 13:38:45.574207 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-07-12 13:38:45.574217 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-07-12 13:38:45.574228 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-07-12 13:38:45.574239 | orchestrator | 2025-07-12 13:38:45.574250 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-07-12 13:38:45.574262 | orchestrator | Saturday 12 July 2025 13:38:43 +0000 (0:00:02.203) 0:00:06.886 ********* 2025-07-12 13:38:45.574283 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:38:45.574294 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:38:45.574305 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:38:45.574316 | orchestrator | 2025-07-12 13:38:45.574327 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-07-12 13:38:45.574338 | orchestrator | Saturday 12 July 2025 13:38:44 +0000 (0:00:00.591) 0:00:07.478 ********* 2025-07-12 13:38:45.574348 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:38:45.574359 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:38:45.574370 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:38:45.574380 | orchestrator | 2025-07-12 13:38:45.574391 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:38:45.574403 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:38:45.574415 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:38:45.574446 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:38:45.574458 | orchestrator | 2025-07-12 13:38:45.574469 | orchestrator | 2025-07-12 13:38:45.574487 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:38:45.574498 | orchestrator | Saturday 12 July 2025 13:38:45 +0000 (0:00:00.694) 0:00:08.172 ********* 2025-07-12 13:38:45.574509 | orchestrator | =============================================================================== 2025-07-12 13:38:45.574520 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.20s 2025-07-12 13:38:45.574531 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.40s 2025-07-12 13:38:45.574542 | orchestrator | Check device availability ----------------------------------------------- 1.20s 2025-07-12 13:38:45.574553 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.83s 2025-07-12 13:38:45.574564 | orchestrator | Request device events from the kernel 
----------------------------------- 0.69s 2025-07-12 13:38:45.574575 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-07-12 13:38:45.574586 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s 2025-07-12 13:38:45.574597 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s 2025-07-12 13:38:45.574608 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-07-12 13:38:58.345326 | orchestrator | 2025-07-12 13:38:58 | INFO  | Task 8d95bda4-b33e-4728-8ab3-d8356779ee10 (facts) was prepared for execution. 2025-07-12 13:38:58.345425 | orchestrator | 2025-07-12 13:38:58 | INFO  | It takes a moment until task 8d95bda4-b33e-4728-8ab3-d8356779ee10 (facts) has been started and output is visible here. 2025-07-12 13:39:10.826625 | orchestrator | 2025-07-12 13:39:10.826786 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-12 13:39:10.826805 | orchestrator | 2025-07-12 13:39:10.826818 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-12 13:39:10.826830 | orchestrator | Saturday 12 July 2025 13:39:02 +0000 (0:00:00.274) 0:00:00.274 ********* 2025-07-12 13:39:10.826841 | orchestrator | ok: [testbed-manager] 2025-07-12 13:39:10.826854 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:39:10.826865 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:39:10.826876 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:39:10.826887 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:39:10.826898 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:39:10.826909 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:39:10.826920 | orchestrator | 2025-07-12 13:39:10.826932 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-12 13:39:10.826971 | 
orchestrator | Saturday 12 July 2025 13:39:03 +0000 (0:00:01.211) 0:00:01.485 ********* 2025-07-12 13:39:10.826983 | orchestrator | skipping: [testbed-manager] 2025-07-12 13:39:10.826995 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:39:10.827006 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:39:10.827017 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:39:10.827029 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:10.827040 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:10.827051 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:10.827062 | orchestrator | 2025-07-12 13:39:10.827073 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-12 13:39:10.827084 | orchestrator | 2025-07-12 13:39:10.827096 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-12 13:39:10.827106 | orchestrator | Saturday 12 July 2025 13:39:04 +0000 (0:00:01.265) 0:00:02.751 ********* 2025-07-12 13:39:10.827118 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:39:10.827129 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:39:10.827140 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:39:10.827155 | orchestrator | ok: [testbed-manager] 2025-07-12 13:39:10.827167 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:39:10.827180 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:39:10.827192 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:39:10.827204 | orchestrator | 2025-07-12 13:39:10.827217 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-12 13:39:10.827229 | orchestrator | 2025-07-12 13:39:10.827241 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-12 13:39:10.827254 | orchestrator | Saturday 12 July 2025 13:39:09 +0000 (0:00:04.961) 0:00:07.712 ********* 2025-07-12 13:39:10.827267 | orchestrator | 
skipping: [testbed-manager] 2025-07-12 13:39:10.827279 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:39:10.827291 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:39:10.827303 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:39:10.827315 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:39:10.827328 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:10.827340 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:10.827353 | orchestrator | 2025-07-12 13:39:10.827365 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:39:10.827378 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:39:10.827392 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:39:10.827405 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:39:10.827417 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:39:10.827444 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:39:10.827458 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:39:10.827470 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:39:10.827482 | orchestrator | 2025-07-12 13:39:10.827495 | orchestrator | 2025-07-12 13:39:10.827506 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:39:10.827517 | orchestrator | Saturday 12 July 2025 13:39:10 +0000 (0:00:00.563) 0:00:08.276 ********* 2025-07-12 13:39:10.827528 | orchestrator | =============================================================================== 
Gathers facts about hosts ----------------------------------------------- 4.96s
osism.commons.facts : Copy fact files ----------------------------------- 1.27s
osism.commons.facts : Create custom facts directory --------------------- 1.21s
Gather facts for all hosts ---------------------------------------------- 0.56s
2025-07-12 13:39:13 | INFO  | Task 38883798-5674-4a0a-8d51-2f46790c527e (ceph-configure-lvm-volumes) was prepared for execution.
2025-07-12 13:39:13 | INFO  | It takes a moment until task 38883798-5674-4a0a-8d51-2f46790c527e (ceph-configure-lvm-volumes) has been started and output is visible here.

PLAY [Ceph configure LVM] ******************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Saturday 12 July 2025 13:39:18 +0000 (0:00:00.348) 0:00:00.348 *********
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Saturday 12 July 2025 13:39:18 +0000 (0:00:00.245) 0:00:00.593 *********
ok: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:18 +0000 (0:00:00.216) 0:00:00.810 *********
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:19 +0000 (0:00:00.364) 0:00:01.174 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:19 +0000 (0:00:00.473) 0:00:01.648 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:19 +0000 (0:00:00.205) 0:00:01.854 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:19 +0000 (0:00:00.204) 0:00:02.058 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:20 +0000 (0:00:00.183) 0:00:02.241 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:20 +0000 (0:00:00.198) 0:00:02.440 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:20 +0000 (0:00:00.197) 0:00:02.637 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:20 +0000 (0:00:00.217) 0:00:02.855 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:20 +0000 (0:00:00.205) 0:00:03.061 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:21 +0000 (0:00:00.435) 0:00:03.497 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:21 +0000 (0:00:00.408) 0:00:03.906 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:22 +0000 (0:00:00.621) 0:00:04.527 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:23 +0000 (0:00:00.645) 0:00:05.173 *********
ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:23 +0000 (0:00:00.763) 0:00:05.936 *********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:24 +0000 (0:00:00.393) 0:00:06.329 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:24 +0000 (0:00:00.220) 0:00:06.549 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:24 +0000 (0:00:00.209) 0:00:06.759 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:24 +0000 (0:00:00.242) 0:00:07.002 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:25 +0000 (0:00:00.211) 0:00:07.214 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:25 +0000 (0:00:00.229) 0:00:07.443 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:25 +0000 (0:00:00.187) 0:00:07.631 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:25 +0000 (0:00:00.196) 0:00:07.827 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:25 +0000 (0:00:00.210) 0:00:08.038 *********
ok: [testbed-node-3] => (item=sda1)
ok: [testbed-node-3] => (item=sda14)
ok: [testbed-node-3] => (item=sda15)
ok: [testbed-node-3] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:26 +0000 (0:00:01.014) 0:00:09.052 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:27 +0000 (0:00:00.190) 0:00:09.243 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:27 +0000 (0:00:00.253) 0:00:09.497 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:27 +0000 (0:00:00.200) 0:00:09.698 *********
skipping: [testbed-node-3]

TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
Saturday 12 July 2025 13:39:27 +0000 (0:00:00.201) 0:00:09.899 *********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})

TASK [Generate WAL VG names] ***************************************************
Saturday 12 July 2025 13:39:27 +0000 (0:00:00.163) 0:00:10.063 *********
skipping: [testbed-node-3]

TASK [Generate DB VG names] ****************************************************
Saturday 12 July 2025 13:39:28 +0000 (0:00:00.130) 0:00:10.194 *********
skipping: [testbed-node-3]

TASK [Generate shared DB/WAL VG names] *****************************************
Saturday 12 July 2025 13:39:28 +0000 (0:00:00.131) 0:00:10.325 *********
skipping: [testbed-node-3]

TASK [Define lvm_volumes structures] *******************************************
Saturday 12 July 2025 13:39:28 +0000 (0:00:00.123) 0:00:10.449 *********
ok: [testbed-node-3]

TASK [Generate lvm_volumes structure (block only)] *****************************
Saturday 12 July 2025 13:39:28 +0000 (0:00:00.139) 0:00:10.588 *********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09698b4c-8482-58a0-ad33-d3500ef3a9f7'}})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f35471dc-23d0-5222-b540-93882fae0f69'}})

TASK [Generate lvm_volumes structure (block + db)] *****************************
Saturday 12 July 2025 13:39:28 +0000 (0:00:00.173) 0:00:10.762 *********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09698b4c-8482-58a0-ad33-d3500ef3a9f7'}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f35471dc-23d0-5222-b540-93882fae0f69'}})
skipping: [testbed-node-3]

TASK [Generate lvm_volumes structure (block + wal)] ****************************
Saturday 12 July 2025 13:39:28 +0000 (0:00:00.167) 0:00:10.929 *********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09698b4c-8482-58a0-ad33-d3500ef3a9f7'}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f35471dc-23d0-5222-b540-93882fae0f69'}})
skipping: [testbed-node-3]

TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
Saturday 12 July 2025 13:39:28 +0000 (0:00:00.156) 0:00:11.085 *********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09698b4c-8482-58a0-ad33-d3500ef3a9f7'}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f35471dc-23d0-5222-b540-93882fae0f69'}})
skipping: [testbed-node-3]

TASK [Compile lvm_volumes] *****************************************************
Saturday 12 July 2025 13:39:29 +0000 (0:00:00.370) 0:00:11.456 *********
ok: [testbed-node-3]

TASK [Set OSD devices config data] *********************************************
Saturday 12 July 2025 13:39:29 +0000 (0:00:00.152) 0:00:11.608 *********
ok: [testbed-node-3]

TASK [Set DB devices config data] **********************************************
Saturday 12 July 2025 13:39:29 +0000 (0:00:00.156) 0:00:11.765 *********
skipping: [testbed-node-3]

TASK [Set WAL devices config data] *********************************************
Saturday 12 July 2025 13:39:29 +0000 (0:00:00.140) 0:00:11.906 *********
skipping: [testbed-node-3]

TASK [Set DB+WAL devices config data] ******************************************
Saturday 12 July 2025 13:39:29 +0000 (0:00:00.136) 0:00:12.042 *********
skipping: [testbed-node-3]

TASK [Print ceph_osd_devices] **************************************************
Saturday 12 July 2025 13:39:30 +0000 (0:00:00.133) 0:00:12.176 *********
ok: [testbed-node-3] => {
    "ceph_osd_devices": {
        "sdb": {
            "osd_lvm_uuid": "09698b4c-8482-58a0-ad33-d3500ef3a9f7"
        },
        "sdc": {
            "osd_lvm_uuid": "f35471dc-23d0-5222-b540-93882fae0f69"
        }
    }
}

TASK [Print WAL devices] *******************************************************
Saturday 12 July 2025 13:39:30 +0000 (0:00:00.177) 0:00:12.354 *********
skipping: [testbed-node-3]

TASK [Print DB devices] ********************************************************
Saturday 12 July 2025 13:39:30 +0000 (0:00:00.135) 0:00:12.489 *********
skipping: [testbed-node-3]

TASK [Print shared DB/WAL devices] *********************************************
Saturday 12 July 2025 13:39:30 +0000 (0:00:00.136) 0:00:12.626 *********
skipping: [testbed-node-3]

TASK [Print configuration data] ************************************************
Saturday 12 July 2025 13:39:30 +0000 (0:00:00.134) 0:00:12.760 *********
changed: [testbed-node-3] => {
    "_ceph_configure_lvm_config_data": {
        "ceph_osd_devices": {
            "sdb": {
                "osd_lvm_uuid": "09698b4c-8482-58a0-ad33-d3500ef3a9f7"
            },
            "sdc": {
                "osd_lvm_uuid": "f35471dc-23d0-5222-b540-93882fae0f69"
            }
        },
        "lvm_volumes": [
            {
                "data": "osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7",
                "data_vg": "ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7"
            },
            {
                "data": "osd-block-f35471dc-23d0-5222-b540-93882fae0f69",
                "data_vg": "ceph-f35471dc-23d0-5222-b540-93882fae0f69"
            }
        ]
    }
}

RUNNING HANDLER [Write configuration file] *************************************
Saturday 12 July 2025 13:39:30 +0000 (0:00:00.215) 0:00:12.976 *********
changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]

PLAY [Ceph configure LVM] ******************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Saturday 12 July 2025 13:39:33 +0000 (0:00:02.240) 0:00:15.216 *********
ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Saturday 12 July 2025 13:39:33 +0000 (0:00:00.250) 0:00:15.467 *********
ok: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:33 +0000 (0:00:00.234) 0:00:15.701 *********
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:33 +0000 (0:00:00.370) 0:00:16.072 *********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:34 +0000 (0:00:00.199) 0:00:16.271 *********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:34 +0000 (0:00:00.223) 0:00:16.494 *********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:34 +0000 (0:00:00.202) 0:00:16.697 *********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:34 +0000 (0:00:00.215) 0:00:16.912 *********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:35 +0000 (0:00:00.204) 0:00:17.116 *********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:35 +0000 (0:00:00.609) 0:00:17.726 *********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:35 +0000 (0:00:00.244) 0:00:17.970 *********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:36 +0000 (0:00:00.216) 0:00:18.187 *********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:36 +0000 (0:00:00.413) 0:00:18.601 *********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:37 +0000 (0:00:00.597) 0:00:19.198 *********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:37 +0000 (0:00:00.558) 0:00:19.756 *********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 13:39:38 +0000 (0:00:00.469) 0:00:20.226 *********
ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:38 +0000 (0:00:00.393) 0:00:20.620 *********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:38 +0000 (0:00:00.387) 0:00:21.007 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:39 +0000 (0:00:00.194) 0:00:21.202 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:39 +0000 (0:00:00.758) 0:00:21.961 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:40 +0000 (0:00:00.208) 0:00:22.169 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:40 +0000 (0:00:00.211) 0:00:22.381 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:40 +0000 (0:00:00.210) 0:00:22.592 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:40 +0000 (0:00:00.203) 0:00:22.795 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:40 +0000 (0:00:00.195) 0:00:22.990 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:41 +0000 (0:00:00.225) 0:00:23.216 *********
ok: [testbed-node-4] => (item=sda1)
ok: [testbed-node-4] => (item=sda14)
ok: [testbed-node-4] => (item=sda15)
ok: [testbed-node-4] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:41 +0000 (0:00:00.646) 0:00:23.863 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:41 +0000 (0:00:00.179) 0:00:24.042 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:42 +0000 (0:00:00.194) 0:00:24.237 *********
skipping: [testbed-node-4]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 13:39:42 +0000 (0:00:00.214) 0:00:24.451 *********
skipping: [testbed-node-4]

TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
Saturday 12 July 2025 13:39:42 +0000 (0:00:00.226) 0:00:24.678 ********* 2025-07-12 13:39:48.451692 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-07-12 13:39:48.451702 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-07-12 13:39:48.451768 | orchestrator | 2025-07-12 13:39:48.451780 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-12 13:39:48.451791 | orchestrator | Saturday 12 July 2025 13:39:42 +0000 (0:00:00.358) 0:00:25.036 ********* 2025-07-12 13:39:48.451802 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.451813 | orchestrator | 2025-07-12 13:39:48.451824 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 13:39:48.451834 | orchestrator | Saturday 12 July 2025 13:39:43 +0000 (0:00:00.137) 0:00:25.174 ********* 2025-07-12 13:39:48.451845 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.451856 | orchestrator | 2025-07-12 13:39:48.451867 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 13:39:48.451877 | orchestrator | Saturday 12 July 2025 13:39:43 +0000 (0:00:00.144) 0:00:25.319 ********* 2025-07-12 13:39:48.451888 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.451899 | orchestrator | 2025-07-12 13:39:48.451910 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 13:39:48.451920 | orchestrator | Saturday 12 July 2025 13:39:43 +0000 (0:00:00.138) 0:00:25.458 ********* 2025-07-12 13:39:48.451931 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:39:48.451943 | orchestrator | 2025-07-12 13:39:48.451954 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 13:39:48.451967 | orchestrator | Saturday 12 July 2025 13:39:43 +0000 (0:00:00.141) 0:00:25.599 ********* 
2025-07-12 13:39:48.451979 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f88c8806-82e1-5c41-a829-e62dc4a8fdb6'}}) 2025-07-12 13:39:48.451993 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbedf305-2fae-5605-926c-96a21a5245d1'}}) 2025-07-12 13:39:48.452006 | orchestrator | 2025-07-12 13:39:48.452018 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 13:39:48.452030 | orchestrator | Saturday 12 July 2025 13:39:43 +0000 (0:00:00.175) 0:00:25.775 ********* 2025-07-12 13:39:48.452044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f88c8806-82e1-5c41-a829-e62dc4a8fdb6'}})  2025-07-12 13:39:48.452058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbedf305-2fae-5605-926c-96a21a5245d1'}})  2025-07-12 13:39:48.452094 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.452108 | orchestrator | 2025-07-12 13:39:48.452120 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 13:39:48.452132 | orchestrator | Saturday 12 July 2025 13:39:43 +0000 (0:00:00.160) 0:00:25.935 ********* 2025-07-12 13:39:48.452144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f88c8806-82e1-5c41-a829-e62dc4a8fdb6'}})  2025-07-12 13:39:48.452156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbedf305-2fae-5605-926c-96a21a5245d1'}})  2025-07-12 13:39:48.452169 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.452181 | orchestrator | 2025-07-12 13:39:48.452193 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 13:39:48.452205 | orchestrator | Saturday 12 July 2025 13:39:43 +0000 (0:00:00.145) 0:00:26.080 ********* 2025-07-12 13:39:48.452217 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f88c8806-82e1-5c41-a829-e62dc4a8fdb6'}})  2025-07-12 13:39:48.452229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbedf305-2fae-5605-926c-96a21a5245d1'}})  2025-07-12 13:39:48.452242 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.452254 | orchestrator | 2025-07-12 13:39:48.452266 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 13:39:48.452302 | orchestrator | Saturday 12 July 2025 13:39:44 +0000 (0:00:00.155) 0:00:26.235 ********* 2025-07-12 13:39:48.452316 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:39:48.452328 | orchestrator | 2025-07-12 13:39:48.452339 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-12 13:39:48.452350 | orchestrator | Saturday 12 July 2025 13:39:44 +0000 (0:00:00.136) 0:00:26.372 ********* 2025-07-12 13:39:48.452361 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:39:48.452372 | orchestrator | 2025-07-12 13:39:48.452383 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 13:39:48.452394 | orchestrator | Saturday 12 July 2025 13:39:44 +0000 (0:00:00.143) 0:00:26.516 ********* 2025-07-12 13:39:48.452405 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.452416 | orchestrator | 2025-07-12 13:39:48.452444 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 13:39:48.452456 | orchestrator | Saturday 12 July 2025 13:39:44 +0000 (0:00:00.135) 0:00:26.651 ********* 2025-07-12 13:39:48.452466 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.452477 | orchestrator | 2025-07-12 13:39:48.452488 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-12 13:39:48.452499 | orchestrator | 
Saturday 12 July 2025 13:39:44 +0000 (0:00:00.321) 0:00:26.973 ********* 2025-07-12 13:39:48.452510 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.452521 | orchestrator | 2025-07-12 13:39:48.452532 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-12 13:39:48.452543 | orchestrator | Saturday 12 July 2025 13:39:45 +0000 (0:00:00.139) 0:00:27.113 ********* 2025-07-12 13:39:48.452554 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 13:39:48.452564 | orchestrator |  "ceph_osd_devices": { 2025-07-12 13:39:48.452575 | orchestrator |  "sdb": { 2025-07-12 13:39:48.452587 | orchestrator |  "osd_lvm_uuid": "f88c8806-82e1-5c41-a829-e62dc4a8fdb6" 2025-07-12 13:39:48.452598 | orchestrator |  }, 2025-07-12 13:39:48.452608 | orchestrator |  "sdc": { 2025-07-12 13:39:48.452619 | orchestrator |  "osd_lvm_uuid": "fbedf305-2fae-5605-926c-96a21a5245d1" 2025-07-12 13:39:48.452630 | orchestrator |  } 2025-07-12 13:39:48.452641 | orchestrator |  } 2025-07-12 13:39:48.452652 | orchestrator | } 2025-07-12 13:39:48.452663 | orchestrator | 2025-07-12 13:39:48.452674 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-12 13:39:48.452685 | orchestrator | Saturday 12 July 2025 13:39:45 +0000 (0:00:00.152) 0:00:27.265 ********* 2025-07-12 13:39:48.452703 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.452730 | orchestrator | 2025-07-12 13:39:48.452742 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-12 13:39:48.452753 | orchestrator | Saturday 12 July 2025 13:39:45 +0000 (0:00:00.141) 0:00:27.406 ********* 2025-07-12 13:39:48.452764 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.452774 | orchestrator | 2025-07-12 13:39:48.452785 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-12 13:39:48.452796 | orchestrator | Saturday 
12 July 2025 13:39:45 +0000 (0:00:00.138) 0:00:27.545 ********* 2025-07-12 13:39:48.452807 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:39:48.452818 | orchestrator | 2025-07-12 13:39:48.452829 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-12 13:39:48.452839 | orchestrator | Saturday 12 July 2025 13:39:45 +0000 (0:00:00.134) 0:00:27.680 ********* 2025-07-12 13:39:48.452850 | orchestrator | changed: [testbed-node-4] => { 2025-07-12 13:39:48.452861 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-12 13:39:48.452872 | orchestrator |  "ceph_osd_devices": { 2025-07-12 13:39:48.452883 | orchestrator |  "sdb": { 2025-07-12 13:39:48.452894 | orchestrator |  "osd_lvm_uuid": "f88c8806-82e1-5c41-a829-e62dc4a8fdb6" 2025-07-12 13:39:48.452905 | orchestrator |  }, 2025-07-12 13:39:48.452916 | orchestrator |  "sdc": { 2025-07-12 13:39:48.452926 | orchestrator |  "osd_lvm_uuid": "fbedf305-2fae-5605-926c-96a21a5245d1" 2025-07-12 13:39:48.452937 | orchestrator |  } 2025-07-12 13:39:48.452948 | orchestrator |  }, 2025-07-12 13:39:48.452959 | orchestrator |  "lvm_volumes": [ 2025-07-12 13:39:48.452969 | orchestrator |  { 2025-07-12 13:39:48.452980 | orchestrator |  "data": "osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6", 2025-07-12 13:39:48.452991 | orchestrator |  "data_vg": "ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6" 2025-07-12 13:39:48.453002 | orchestrator |  }, 2025-07-12 13:39:48.453013 | orchestrator |  { 2025-07-12 13:39:48.453024 | orchestrator |  "data": "osd-block-fbedf305-2fae-5605-926c-96a21a5245d1", 2025-07-12 13:39:48.453035 | orchestrator |  "data_vg": "ceph-fbedf305-2fae-5605-926c-96a21a5245d1" 2025-07-12 13:39:48.453045 | orchestrator |  } 2025-07-12 13:39:48.453056 | orchestrator |  ] 2025-07-12 13:39:48.453067 | orchestrator |  } 2025-07-12 13:39:48.453078 | orchestrator | } 2025-07-12 13:39:48.453089 | orchestrator | 2025-07-12 13:39:48.453100 | orchestrator | RUNNING HANDLER 
[Write configuration file] ************************************* 2025-07-12 13:39:48.453110 | orchestrator | Saturday 12 July 2025 13:39:45 +0000 (0:00:00.219) 0:00:27.899 ********* 2025-07-12 13:39:48.453121 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-12 13:39:48.453132 | orchestrator | 2025-07-12 13:39:48.453143 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-12 13:39:48.453154 | orchestrator | 2025-07-12 13:39:48.453165 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 13:39:48.453175 | orchestrator | Saturday 12 July 2025 13:39:46 +0000 (0:00:01.124) 0:00:29.023 ********* 2025-07-12 13:39:48.453186 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-12 13:39:48.453197 | orchestrator | 2025-07-12 13:39:48.453208 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 13:39:48.453218 | orchestrator | Saturday 12 July 2025 13:39:47 +0000 (0:00:00.471) 0:00:29.496 ********* 2025-07-12 13:39:48.453229 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:39:48.453240 | orchestrator | 2025-07-12 13:39:48.453251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:48.453262 | orchestrator | Saturday 12 July 2025 13:39:48 +0000 (0:00:00.663) 0:00:30.159 ********* 2025-07-12 13:39:48.453273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-07-12 13:39:48.453290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-07-12 13:39:48.453301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-07-12 13:39:48.453312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-07-12 
13:39:48.453322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-07-12 13:39:48.453333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-07-12 13:39:48.453350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-07-12 13:39:56.736065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-12 13:39:56.736169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-12 13:39:56.736182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-12 13:39:56.736191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-12 13:39:56.736200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-12 13:39:56.736209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-12 13:39:56.736218 | orchestrator | 2025-07-12 13:39:56.736230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736262 | orchestrator | Saturday 12 July 2025 13:39:48 +0000 (0:00:00.365) 0:00:30.524 ********* 2025-07-12 13:39:56.736274 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.736286 | orchestrator | 2025-07-12 13:39:56.736297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736308 | orchestrator | Saturday 12 July 2025 13:39:48 +0000 (0:00:00.196) 0:00:30.720 ********* 2025-07-12 13:39:56.736319 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.736330 | orchestrator | 2025-07-12 13:39:56.736341 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-07-12 13:39:56.736352 | orchestrator | Saturday 12 July 2025 13:39:48 +0000 (0:00:00.203) 0:00:30.924 ********* 2025-07-12 13:39:56.736363 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.736374 | orchestrator | 2025-07-12 13:39:56.736384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736396 | orchestrator | Saturday 12 July 2025 13:39:49 +0000 (0:00:00.205) 0:00:31.129 ********* 2025-07-12 13:39:56.736407 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.736418 | orchestrator | 2025-07-12 13:39:56.736429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736440 | orchestrator | Saturday 12 July 2025 13:39:49 +0000 (0:00:00.207) 0:00:31.337 ********* 2025-07-12 13:39:56.736450 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.736461 | orchestrator | 2025-07-12 13:39:56.736472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736483 | orchestrator | Saturday 12 July 2025 13:39:49 +0000 (0:00:00.199) 0:00:31.536 ********* 2025-07-12 13:39:56.736494 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.736506 | orchestrator | 2025-07-12 13:39:56.736517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736528 | orchestrator | Saturday 12 July 2025 13:39:49 +0000 (0:00:00.199) 0:00:31.735 ********* 2025-07-12 13:39:56.736539 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.736550 | orchestrator | 2025-07-12 13:39:56.736561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736572 | orchestrator | Saturday 12 July 2025 13:39:49 +0000 (0:00:00.230) 0:00:31.966 ********* 2025-07-12 13:39:56.736583 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.736594 
| orchestrator | 2025-07-12 13:39:56.736605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736640 | orchestrator | Saturday 12 July 2025 13:39:50 +0000 (0:00:00.199) 0:00:32.165 ********* 2025-07-12 13:39:56.736652 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92) 2025-07-12 13:39:56.736664 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92) 2025-07-12 13:39:56.736675 | orchestrator | 2025-07-12 13:39:56.736686 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736733 | orchestrator | Saturday 12 July 2025 13:39:50 +0000 (0:00:00.625) 0:00:32.790 ********* 2025-07-12 13:39:56.736746 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094) 2025-07-12 13:39:56.736757 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094) 2025-07-12 13:39:56.736768 | orchestrator | 2025-07-12 13:39:56.736779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736790 | orchestrator | Saturday 12 July 2025 13:39:51 +0000 (0:00:00.835) 0:00:33.625 ********* 2025-07-12 13:39:56.736800 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f) 2025-07-12 13:39:56.736811 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f) 2025-07-12 13:39:56.736822 | orchestrator | 2025-07-12 13:39:56.736833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736844 | orchestrator | Saturday 12 July 2025 13:39:51 +0000 (0:00:00.412) 0:00:34.038 ********* 2025-07-12 13:39:56.736855 | orchestrator | ok: 
[testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29) 2025-07-12 13:39:56.736866 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29) 2025-07-12 13:39:56.736876 | orchestrator | 2025-07-12 13:39:56.736887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:39:56.736898 | orchestrator | Saturday 12 July 2025 13:39:52 +0000 (0:00:00.415) 0:00:34.453 ********* 2025-07-12 13:39:56.736909 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 13:39:56.736920 | orchestrator | 2025-07-12 13:39:56.736931 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.736942 | orchestrator | Saturday 12 July 2025 13:39:52 +0000 (0:00:00.326) 0:00:34.780 ********* 2025-07-12 13:39:56.736971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-12 13:39:56.736983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-12 13:39:56.736994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-12 13:39:56.737005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-12 13:39:56.737016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-12 13:39:56.737027 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-12 13:39:56.737037 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-12 13:39:56.737048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-12 13:39:56.737059 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-12 13:39:56.737070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-12 13:39:56.737080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-07-12 13:39:56.737091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-12 13:39:56.737110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-12 13:39:56.737121 | orchestrator | 2025-07-12 13:39:56.737132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737143 | orchestrator | Saturday 12 July 2025 13:39:53 +0000 (0:00:00.366) 0:00:35.147 ********* 2025-07-12 13:39:56.737154 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737165 | orchestrator | 2025-07-12 13:39:56.737176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737187 | orchestrator | Saturday 12 July 2025 13:39:53 +0000 (0:00:00.194) 0:00:35.341 ********* 2025-07-12 13:39:56.737198 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737209 | orchestrator | 2025-07-12 13:39:56.737220 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737230 | orchestrator | Saturday 12 July 2025 13:39:53 +0000 (0:00:00.209) 0:00:35.550 ********* 2025-07-12 13:39:56.737241 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737252 | orchestrator | 2025-07-12 13:39:56.737263 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737274 | orchestrator | Saturday 12 July 2025 13:39:53 +0000 (0:00:00.218) 0:00:35.769 ********* 2025-07-12 13:39:56.737285 | orchestrator | 
skipping: [testbed-node-5] 2025-07-12 13:39:56.737296 | orchestrator | 2025-07-12 13:39:56.737306 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737317 | orchestrator | Saturday 12 July 2025 13:39:53 +0000 (0:00:00.189) 0:00:35.958 ********* 2025-07-12 13:39:56.737328 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737339 | orchestrator | 2025-07-12 13:39:56.737350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737361 | orchestrator | Saturday 12 July 2025 13:39:54 +0000 (0:00:00.238) 0:00:36.197 ********* 2025-07-12 13:39:56.737372 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737383 | orchestrator | 2025-07-12 13:39:56.737393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737410 | orchestrator | Saturday 12 July 2025 13:39:54 +0000 (0:00:00.659) 0:00:36.857 ********* 2025-07-12 13:39:56.737421 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737432 | orchestrator | 2025-07-12 13:39:56.737443 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737454 | orchestrator | Saturday 12 July 2025 13:39:54 +0000 (0:00:00.204) 0:00:37.061 ********* 2025-07-12 13:39:56.737465 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737475 | orchestrator | 2025-07-12 13:39:56.737486 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737497 | orchestrator | Saturday 12 July 2025 13:39:55 +0000 (0:00:00.193) 0:00:37.254 ********* 2025-07-12 13:39:56.737508 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-12 13:39:56.737519 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-07-12 13:39:56.737529 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-07-12 
13:39:56.737540 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-07-12 13:39:56.737551 | orchestrator | 2025-07-12 13:39:56.737562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737573 | orchestrator | Saturday 12 July 2025 13:39:55 +0000 (0:00:00.706) 0:00:37.961 ********* 2025-07-12 13:39:56.737584 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737594 | orchestrator | 2025-07-12 13:39:56.737605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737622 | orchestrator | Saturday 12 July 2025 13:39:56 +0000 (0:00:00.204) 0:00:38.166 ********* 2025-07-12 13:39:56.737633 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737644 | orchestrator | 2025-07-12 13:39:56.737655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737666 | orchestrator | Saturday 12 July 2025 13:39:56 +0000 (0:00:00.212) 0:00:38.378 ********* 2025-07-12 13:39:56.737684 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737695 | orchestrator | 2025-07-12 13:39:56.737706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:39:56.737733 | orchestrator | Saturday 12 July 2025 13:39:56 +0000 (0:00:00.208) 0:00:38.587 ********* 2025-07-12 13:39:56.737745 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:39:56.737755 | orchestrator | 2025-07-12 13:39:56.737766 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-12 13:39:56.737783 | orchestrator | Saturday 12 July 2025 13:39:56 +0000 (0:00:00.222) 0:00:38.809 ********* 2025-07-12 13:40:00.856630 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-07-12 13:40:00.856789 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 
2025-07-12 13:40:00.856807 | orchestrator | 2025-07-12 13:40:00.856821 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-12 13:40:00.856833 | orchestrator | Saturday 12 July 2025 13:39:56 +0000 (0:00:00.176) 0:00:38.986 ********* 2025-07-12 13:40:00.856845 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.856856 | orchestrator | 2025-07-12 13:40:00.856868 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 13:40:00.856880 | orchestrator | Saturday 12 July 2025 13:39:57 +0000 (0:00:00.127) 0:00:39.113 ********* 2025-07-12 13:40:00.856891 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.856902 | orchestrator | 2025-07-12 13:40:00.856914 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 13:40:00.856925 | orchestrator | Saturday 12 July 2025 13:39:57 +0000 (0:00:00.135) 0:00:39.249 ********* 2025-07-12 13:40:00.856936 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.856948 | orchestrator | 2025-07-12 13:40:00.856959 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 13:40:00.856970 | orchestrator | Saturday 12 July 2025 13:39:57 +0000 (0:00:00.134) 0:00:39.384 ********* 2025-07-12 13:40:00.856982 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:40:00.856994 | orchestrator | 2025-07-12 13:40:00.857005 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 13:40:00.857016 | orchestrator | Saturday 12 July 2025 13:39:57 +0000 (0:00:00.320) 0:00:39.704 ********* 2025-07-12 13:40:00.857029 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2177925c-0e94-5467-9f04-b37733dbe47a'}}) 2025-07-12 13:40:00.857041 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'10b3d195-009d-5006-b5f6-1b7aa1316d97'}}) 2025-07-12 13:40:00.857052 | orchestrator | 2025-07-12 13:40:00.857064 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 13:40:00.857075 | orchestrator | Saturday 12 July 2025 13:39:57 +0000 (0:00:00.169) 0:00:39.873 ********* 2025-07-12 13:40:00.857087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2177925c-0e94-5467-9f04-b37733dbe47a'}})  2025-07-12 13:40:00.857100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '10b3d195-009d-5006-b5f6-1b7aa1316d97'}})  2025-07-12 13:40:00.857111 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.857122 | orchestrator | 2025-07-12 13:40:00.857134 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 13:40:00.857146 | orchestrator | Saturday 12 July 2025 13:39:57 +0000 (0:00:00.155) 0:00:40.029 ********* 2025-07-12 13:40:00.857159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2177925c-0e94-5467-9f04-b37733dbe47a'}})  2025-07-12 13:40:00.857172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '10b3d195-009d-5006-b5f6-1b7aa1316d97'}})  2025-07-12 13:40:00.857185 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.857197 | orchestrator | 2025-07-12 13:40:00.857210 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 13:40:00.857246 | orchestrator | Saturday 12 July 2025 13:39:58 +0000 (0:00:00.153) 0:00:40.182 ********* 2025-07-12 13:40:00.857259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2177925c-0e94-5467-9f04-b37733dbe47a'}})  2025-07-12 13:40:00.857271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'10b3d195-009d-5006-b5f6-1b7aa1316d97'}})  2025-07-12 13:40:00.857283 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.857296 | orchestrator | 2025-07-12 13:40:00.857309 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 13:40:00.857321 | orchestrator | Saturday 12 July 2025 13:39:58 +0000 (0:00:00.149) 0:00:40.332 ********* 2025-07-12 13:40:00.857333 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:40:00.857346 | orchestrator | 2025-07-12 13:40:00.857357 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-12 13:40:00.857368 | orchestrator | Saturday 12 July 2025 13:39:58 +0000 (0:00:00.143) 0:00:40.476 ********* 2025-07-12 13:40:00.857379 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:40:00.857391 | orchestrator | 2025-07-12 13:40:00.857402 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 13:40:00.857413 | orchestrator | Saturday 12 July 2025 13:39:58 +0000 (0:00:00.145) 0:00:40.621 ********* 2025-07-12 13:40:00.857424 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.857435 | orchestrator | 2025-07-12 13:40:00.857446 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 13:40:00.857457 | orchestrator | Saturday 12 July 2025 13:39:58 +0000 (0:00:00.139) 0:00:40.760 ********* 2025-07-12 13:40:00.857468 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.857479 | orchestrator | 2025-07-12 13:40:00.857491 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-12 13:40:00.857502 | orchestrator | Saturday 12 July 2025 13:39:58 +0000 (0:00:00.123) 0:00:40.883 ********* 2025-07-12 13:40:00.857513 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.857524 | orchestrator | 2025-07-12 13:40:00.857536 | orchestrator | TASK [Print 
ceph_osd_devices] ************************************************** 2025-07-12 13:40:00.857547 | orchestrator | Saturday 12 July 2025 13:39:58 +0000 (0:00:00.142) 0:00:41.026 ********* 2025-07-12 13:40:00.857558 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 13:40:00.857569 | orchestrator |  "ceph_osd_devices": { 2025-07-12 13:40:00.857580 | orchestrator |  "sdb": { 2025-07-12 13:40:00.857596 | orchestrator |  "osd_lvm_uuid": "2177925c-0e94-5467-9f04-b37733dbe47a" 2025-07-12 13:40:00.857627 | orchestrator |  }, 2025-07-12 13:40:00.857639 | orchestrator |  "sdc": { 2025-07-12 13:40:00.857650 | orchestrator |  "osd_lvm_uuid": "10b3d195-009d-5006-b5f6-1b7aa1316d97" 2025-07-12 13:40:00.857661 | orchestrator |  } 2025-07-12 13:40:00.857672 | orchestrator |  } 2025-07-12 13:40:00.857683 | orchestrator | } 2025-07-12 13:40:00.857695 | orchestrator | 2025-07-12 13:40:00.857706 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-12 13:40:00.857745 | orchestrator | Saturday 12 July 2025 13:39:59 +0000 (0:00:00.135) 0:00:41.161 ********* 2025-07-12 13:40:00.857757 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.857769 | orchestrator | 2025-07-12 13:40:00.857780 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-12 13:40:00.857791 | orchestrator | Saturday 12 July 2025 13:39:59 +0000 (0:00:00.135) 0:00:41.296 ********* 2025-07-12 13:40:00.857802 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.857813 | orchestrator | 2025-07-12 13:40:00.857824 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-12 13:40:00.857835 | orchestrator | Saturday 12 July 2025 13:39:59 +0000 (0:00:00.340) 0:00:41.636 ********* 2025-07-12 13:40:00.857846 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:40:00.857857 | orchestrator | 2025-07-12 13:40:00.857868 | orchestrator | TASK [Print 
configuration data] ************************************************ 2025-07-12 13:40:00.857879 | orchestrator | Saturday 12 July 2025 13:39:59 +0000 (0:00:00.143) 0:00:41.780 ********* 2025-07-12 13:40:00.857899 | orchestrator | changed: [testbed-node-5] => { 2025-07-12 13:40:00.857911 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-12 13:40:00.857940 | orchestrator |  "ceph_osd_devices": { 2025-07-12 13:40:00.857952 | orchestrator |  "sdb": { 2025-07-12 13:40:00.857963 | orchestrator |  "osd_lvm_uuid": "2177925c-0e94-5467-9f04-b37733dbe47a" 2025-07-12 13:40:00.857974 | orchestrator |  }, 2025-07-12 13:40:00.857986 | orchestrator |  "sdc": { 2025-07-12 13:40:00.857997 | orchestrator |  "osd_lvm_uuid": "10b3d195-009d-5006-b5f6-1b7aa1316d97" 2025-07-12 13:40:00.858008 | orchestrator |  } 2025-07-12 13:40:00.858066 | orchestrator |  }, 2025-07-12 13:40:00.858079 | orchestrator |  "lvm_volumes": [ 2025-07-12 13:40:00.858090 | orchestrator |  { 2025-07-12 13:40:00.858102 | orchestrator |  "data": "osd-block-2177925c-0e94-5467-9f04-b37733dbe47a", 2025-07-12 13:40:00.858113 | orchestrator |  "data_vg": "ceph-2177925c-0e94-5467-9f04-b37733dbe47a" 2025-07-12 13:40:00.858124 | orchestrator |  }, 2025-07-12 13:40:00.858135 | orchestrator |  { 2025-07-12 13:40:00.858146 | orchestrator |  "data": "osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97", 2025-07-12 13:40:00.858158 | orchestrator |  "data_vg": "ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97" 2025-07-12 13:40:00.858169 | orchestrator |  } 2025-07-12 13:40:00.858180 | orchestrator |  ] 2025-07-12 13:40:00.858191 | orchestrator |  } 2025-07-12 13:40:00.858202 | orchestrator | } 2025-07-12 13:40:00.858213 | orchestrator | 2025-07-12 13:40:00.858224 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-12 13:40:00.858235 | orchestrator | Saturday 12 July 2025 13:39:59 +0000 (0:00:00.201) 0:00:41.981 ********* 2025-07-12 13:40:00.858247 | orchestrator | changed: 
[testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-12 13:40:00.858258 | orchestrator | 2025-07-12 13:40:00.858269 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:40:00.858281 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 13:40:00.858293 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 13:40:00.858304 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 13:40:00.858316 | orchestrator | 2025-07-12 13:40:00.858327 | orchestrator | 2025-07-12 13:40:00.858338 | orchestrator | 2025-07-12 13:40:00.858349 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:40:00.858361 | orchestrator | Saturday 12 July 2025 13:40:00 +0000 (0:00:00.942) 0:00:42.924 ********* 2025-07-12 13:40:00.858372 | orchestrator | =============================================================================== 2025-07-12 13:40:00.858383 | orchestrator | Write configuration file ------------------------------------------------ 4.31s 2025-07-12 13:40:00.858394 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s 2025-07-12 13:40:00.858405 | orchestrator | Get initial list of available block devices ----------------------------- 1.11s 2025-07-12 13:40:00.858416 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s 2025-07-12 13:40:00.858427 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s 2025-07-12 13:40:00.858438 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.97s 2025-07-12 13:40:00.858455 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2025-07-12 
13:40:00.858467 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2025-07-12 13:40:00.858478 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2025-07-12 13:40:00.858497 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-07-12 13:40:00.858508 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.70s 2025-07-12 13:40:00.858519 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.68s 2025-07-12 13:40:00.858530 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2025-07-12 13:40:00.858541 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-07-12 13:40:00.858560 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-07-12 13:40:01.189672 | orchestrator | Print configuration data ------------------------------------------------ 0.64s 2025-07-12 13:40:01.189818 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-07-12 13:40:01.189834 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-07-12 13:40:01.189846 | orchestrator | Print DB devices -------------------------------------------------------- 0.62s 2025-07-12 13:40:01.189857 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-07-12 13:40:23.581239 | orchestrator | 2025-07-12 13:40:23 | INFO  | Task b946f6b6-02ca-4e5b-bfb7-ff0081a05c24 (sync inventory) is running in background. Output coming soon. 
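The "Print configuration data" output above shows how the `ceph-configure-lvm-devices` play turns each `ceph_osd_devices` entry into an `lvm_volumes` item: the LV is named `osd-block-<osd_lvm_uuid>` and its VG `ceph-<osd_lvm_uuid>`. A minimal sketch of that mapping, assuming only what the printed structures show (the function name `build_lvm_volumes` is illustrative, not OSISM code):

```python
# Sketch (assumption): reproduce the "Compile lvm_volumes" result seen in the
# log from the ceph_osd_devices dict. Names here are illustrative only.
def build_lvm_volumes(ceph_osd_devices: dict) -> list[dict]:
    """Derive block-only lvm_volumes entries from per-device OSD UUIDs."""
    volumes = []
    for device, cfg in sorted(ceph_osd_devices.items()):
        osd_uuid = cfg["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{osd_uuid}",   # logical volume name
            "data_vg": f"ceph-{osd_uuid}",     # volume group name
        })
    return volumes

# Values taken from the testbed-node-5 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "2177925c-0e94-5467-9f04-b37733dbe47a"},
    "sdc": {"osd_lvm_uuid": "10b3d195-009d-5006-b5f6-1b7aa1316d97"},
}
print(build_lvm_volumes(devices))
```

Run against the `sdb`/`sdc` devices from this log, it yields the same two `data`/`data_vg` pairs that appear under `lvm_volumes` in the printed configuration data.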
2025-07-12 13:40:42.362586 | orchestrator | 2025-07-12 13:40:24 | INFO  | Starting group_vars file reorganization 2025-07-12 13:40:42.362765 | orchestrator | 2025-07-12 13:40:24 | INFO  | Moved 0 file(s) to their respective directories 2025-07-12 13:40:42.362785 | orchestrator | 2025-07-12 13:40:24 | INFO  | Group_vars file reorganization completed 2025-07-12 13:40:42.362797 | orchestrator | 2025-07-12 13:40:27 | INFO  | Starting variable preparation from inventory 2025-07-12 13:40:42.362809 | orchestrator | 2025-07-12 13:40:28 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-07-12 13:40:42.362821 | orchestrator | 2025-07-12 13:40:28 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-07-12 13:40:42.362832 | orchestrator | 2025-07-12 13:40:28 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-07-12 13:40:42.362843 | orchestrator | 2025-07-12 13:40:28 | INFO  | 3 file(s) written, 6 host(s) processed 2025-07-12 13:40:42.362854 | orchestrator | 2025-07-12 13:40:28 | INFO  | Variable preparation completed 2025-07-12 13:40:42.362865 | orchestrator | 2025-07-12 13:40:29 | INFO  | Starting inventory overwrite handling 2025-07-12 13:40:42.362877 | orchestrator | 2025-07-12 13:40:29 | INFO  | Handling group overwrites in 99-overwrite 2025-07-12 13:40:42.362888 | orchestrator | 2025-07-12 13:40:29 | INFO  | Removing group frr:children from 60-generic 2025-07-12 13:40:42.362899 | orchestrator | 2025-07-12 13:40:29 | INFO  | Removing group storage:children from 50-kolla 2025-07-12 13:40:42.362910 | orchestrator | 2025-07-12 13:40:29 | INFO  | Removing group netbird:children from 50-infrastruture 2025-07-12 13:40:42.362921 | orchestrator | 2025-07-12 13:40:29 | INFO  | Removing group ceph-mds from 50-ceph 2025-07-12 13:40:42.362935 | orchestrator | 2025-07-12 13:40:29 | INFO  | Removing group ceph-rgw from 50-ceph 2025-07-12 13:40:42.362948 | orchestrator | 2025-07-12 13:40:29 | INFO  | Handling group 
overwrites in 20-roles 2025-07-12 13:40:42.362960 | orchestrator | 2025-07-12 13:40:29 | INFO  | Removing group k3s_node from 50-infrastruture 2025-07-12 13:40:42.362973 | orchestrator | 2025-07-12 13:40:29 | INFO  | Removed 6 group(s) in total 2025-07-12 13:40:42.362986 | orchestrator | 2025-07-12 13:40:29 | INFO  | Inventory overwrite handling completed 2025-07-12 13:40:42.362998 | orchestrator | 2025-07-12 13:40:30 | INFO  | Starting merge of inventory files 2025-07-12 13:40:42.363049 | orchestrator | 2025-07-12 13:40:30 | INFO  | Inventory files merged successfully 2025-07-12 13:40:42.363062 | orchestrator | 2025-07-12 13:40:34 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-07-12 13:40:42.363075 | orchestrator | 2025-07-12 13:40:41 | INFO  | Successfully wrote ClusterShell configuration 2025-07-12 13:40:42.363088 | orchestrator | [master ab48aec] 2025-07-12-13-40 2025-07-12 13:40:42.363101 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-07-12 13:40:44.485967 | orchestrator | 2025-07-12 13:40:44 | INFO  | Task 53cfbd0d-8b25-406a-884e-79a6ec6e82bf (ceph-create-lvm-devices) was prepared for execution. 2025-07-12 13:40:44.486125 | orchestrator | 2025-07-12 13:40:44 | INFO  | It takes a moment until task 53cfbd0d-8b25-406a-884e-79a6ec6e82bf (ceph-create-lvm-devices) has been started and output is visible here. 
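The `ceph-create-lvm-devices` play that starts below creates one volume group per OSD device and one block logical volume inside it ("Create block VGs" / "Create block LVs"). A hedged sketch of the shell-level equivalent, assuming the play behaves like `vgcreate`/`lvcreate` (it presumably uses Ansible's LVM modules internally; the physical-volume paths and the 100%VG sizing are assumptions, not taken from the playbook):

```python
# Sketch (assumption): shell equivalents of the "Create block VGs" and
# "Create block LVs" tasks. pv_by_vg maps each VG to its backing device;
# the /dev/sdb path and -l 100%VG layout are illustrative only.
def lvm_commands(lvm_volumes: list[dict], pv_by_vg: dict) -> list[str]:
    cmds = []
    for vol in lvm_volumes:
        pv = pv_by_vg[vol["data_vg"]]
        cmds.append(f"vgcreate {vol['data_vg']} {pv}")
        # One block LV spanning the whole VG (assumed layout).
        cmds.append(f"lvcreate -l 100%VG -n {vol['data']} {vol['data_vg']}")
    return cmds

# Entry taken from the testbed-node-3 output below.
volumes = [
    {"data": "osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7",
     "data_vg": "ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7"},
]
cmds = lvm_commands(volumes, {"ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7": "/dev/sdb"})
print(cmds)
```

This matches the task ordering in the log: VGs are created first (the 2.06s "Create block VGs" step), then the LVs inside them.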
2025-07-12 13:40:56.048239 | orchestrator | 2025-07-12 13:40:56.048355 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-12 13:40:56.048372 | orchestrator | 2025-07-12 13:40:56.048384 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 13:40:56.048396 | orchestrator | Saturday 12 July 2025 13:40:48 +0000 (0:00:00.307) 0:00:00.307 ********* 2025-07-12 13:40:56.048408 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 13:40:56.048419 | orchestrator | 2025-07-12 13:40:56.048430 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 13:40:56.048441 | orchestrator | Saturday 12 July 2025 13:40:48 +0000 (0:00:00.231) 0:00:00.538 ********* 2025-07-12 13:40:56.048452 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:40:56.048464 | orchestrator | 2025-07-12 13:40:56.048475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.048486 | orchestrator | Saturday 12 July 2025 13:40:49 +0000 (0:00:00.212) 0:00:00.751 ********* 2025-07-12 13:40:56.048497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-12 13:40:56.048508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-12 13:40:56.048519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-12 13:40:56.048550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-12 13:40:56.048562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-12 13:40:56.048572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-12 13:40:56.048584 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-12 13:40:56.048651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-12 13:40:56.048662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-12 13:40:56.048673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-12 13:40:56.048685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-12 13:40:56.048696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-12 13:40:56.048707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-12 13:40:56.048718 | orchestrator | 2025-07-12 13:40:56.048729 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.048742 | orchestrator | Saturday 12 July 2025 13:40:49 +0000 (0:00:00.396) 0:00:01.148 ********* 2025-07-12 13:40:56.048755 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.048767 | orchestrator | 2025-07-12 13:40:56.048780 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.048819 | orchestrator | Saturday 12 July 2025 13:40:49 +0000 (0:00:00.450) 0:00:01.598 ********* 2025-07-12 13:40:56.048832 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.048845 | orchestrator | 2025-07-12 13:40:56.048858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.048871 | orchestrator | Saturday 12 July 2025 13:40:50 +0000 (0:00:00.203) 0:00:01.801 ********* 2025-07-12 13:40:56.048883 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.048896 | orchestrator | 2025-07-12 13:40:56.048908 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-07-12 13:40:56.048921 | orchestrator | Saturday 12 July 2025 13:40:50 +0000 (0:00:00.203) 0:00:02.005 ********* 2025-07-12 13:40:56.048934 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.048946 | orchestrator | 2025-07-12 13:40:56.048959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.048972 | orchestrator | Saturday 12 July 2025 13:40:50 +0000 (0:00:00.195) 0:00:02.200 ********* 2025-07-12 13:40:56.048985 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.048998 | orchestrator | 2025-07-12 13:40:56.049011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.049023 | orchestrator | Saturday 12 July 2025 13:40:50 +0000 (0:00:00.197) 0:00:02.398 ********* 2025-07-12 13:40:56.049035 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.049047 | orchestrator | 2025-07-12 13:40:56.049060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.049073 | orchestrator | Saturday 12 July 2025 13:40:50 +0000 (0:00:00.193) 0:00:02.591 ********* 2025-07-12 13:40:56.049086 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.049097 | orchestrator | 2025-07-12 13:40:56.049109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.049120 | orchestrator | Saturday 12 July 2025 13:40:51 +0000 (0:00:00.204) 0:00:02.795 ********* 2025-07-12 13:40:56.049130 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.049141 | orchestrator | 2025-07-12 13:40:56.049152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.049163 | orchestrator | Saturday 12 July 2025 13:40:51 +0000 (0:00:00.190) 0:00:02.986 ********* 2025-07-12 13:40:56.049174 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49) 2025-07-12 13:40:56.049186 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49) 2025-07-12 13:40:56.049196 | orchestrator | 2025-07-12 13:40:56.049207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.049218 | orchestrator | Saturday 12 July 2025 13:40:51 +0000 (0:00:00.415) 0:00:03.401 ********* 2025-07-12 13:40:56.049254 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830) 2025-07-12 13:40:56.049266 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830) 2025-07-12 13:40:56.049277 | orchestrator | 2025-07-12 13:40:56.049288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.049298 | orchestrator | Saturday 12 July 2025 13:40:52 +0000 (0:00:00.412) 0:00:03.814 ********* 2025-07-12 13:40:56.049309 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767) 2025-07-12 13:40:56.049320 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767) 2025-07-12 13:40:56.049331 | orchestrator | 2025-07-12 13:40:56.049341 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.049352 | orchestrator | Saturday 12 July 2025 13:40:52 +0000 (0:00:00.592) 0:00:04.406 ********* 2025-07-12 13:40:56.049363 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac) 2025-07-12 13:40:56.049373 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac) 2025-07-12 13:40:56.049391 | orchestrator | 2025-07-12 13:40:56.049402 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:40:56.049413 | orchestrator | Saturday 12 July 2025 13:40:53 +0000 (0:00:00.616) 0:00:05.023 ********* 2025-07-12 13:40:56.049424 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 13:40:56.049435 | orchestrator | 2025-07-12 13:40:56.049445 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:56.049456 | orchestrator | Saturday 12 July 2025 13:40:54 +0000 (0:00:00.701) 0:00:05.724 ********* 2025-07-12 13:40:56.049467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-12 13:40:56.049477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-12 13:40:56.049488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-12 13:40:56.049498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-07-12 13:40:56.049509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-12 13:40:56.049520 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-12 13:40:56.049531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-12 13:40:56.049541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-12 13:40:56.049552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-12 13:40:56.049563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-12 13:40:56.049573 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-12 13:40:56.049584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-12 13:40:56.049611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-12 13:40:56.049623 | orchestrator | 2025-07-12 13:40:56.049634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:56.049644 | orchestrator | Saturday 12 July 2025 13:40:54 +0000 (0:00:00.410) 0:00:06.135 ********* 2025-07-12 13:40:56.049655 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.049666 | orchestrator | 2025-07-12 13:40:56.049677 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:56.049688 | orchestrator | Saturday 12 July 2025 13:40:54 +0000 (0:00:00.200) 0:00:06.335 ********* 2025-07-12 13:40:56.049699 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.049709 | orchestrator | 2025-07-12 13:40:56.049720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:56.049731 | orchestrator | Saturday 12 July 2025 13:40:54 +0000 (0:00:00.178) 0:00:06.513 ********* 2025-07-12 13:40:56.049742 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.049753 | orchestrator | 2025-07-12 13:40:56.049764 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:56.049775 | orchestrator | Saturday 12 July 2025 13:40:55 +0000 (0:00:00.206) 0:00:06.719 ********* 2025-07-12 13:40:56.049785 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.049796 | orchestrator | 2025-07-12 13:40:56.049807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:56.049818 | orchestrator | Saturday 12 July 2025 
13:40:55 +0000 (0:00:00.199) 0:00:06.919 ********* 2025-07-12 13:40:56.049828 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.049839 | orchestrator | 2025-07-12 13:40:56.049850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:56.049861 | orchestrator | Saturday 12 July 2025 13:40:55 +0000 (0:00:00.204) 0:00:07.123 ********* 2025-07-12 13:40:56.049878 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.049889 | orchestrator | 2025-07-12 13:40:56.049900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:56.049911 | orchestrator | Saturday 12 July 2025 13:40:55 +0000 (0:00:00.207) 0:00:07.331 ********* 2025-07-12 13:40:56.049922 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:40:56.049932 | orchestrator | 2025-07-12 13:40:56.049943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:40:56.049954 | orchestrator | Saturday 12 July 2025 13:40:55 +0000 (0:00:00.200) 0:00:07.531 ********* 2025-07-12 13:40:56.049971 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.042407 | orchestrator | 2025-07-12 13:41:04.042521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:04.042538 | orchestrator | Saturday 12 July 2025 13:40:56 +0000 (0:00:00.211) 0:00:07.743 ********* 2025-07-12 13:41:04.042550 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-12 13:41:04.042563 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-12 13:41:04.042648 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-12 13:41:04.042662 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-12 13:41:04.042673 | orchestrator | 2025-07-12 13:41:04.042685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:04.042697 | 
orchestrator | Saturday 12 July 2025 13:40:57 +0000 (0:00:01.062) 0:00:08.806 ********* 2025-07-12 13:41:04.042709 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.042720 | orchestrator | 2025-07-12 13:41:04.042731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:04.042743 | orchestrator | Saturday 12 July 2025 13:40:57 +0000 (0:00:00.206) 0:00:09.013 ********* 2025-07-12 13:41:04.042754 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.042765 | orchestrator | 2025-07-12 13:41:04.042776 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:04.042787 | orchestrator | Saturday 12 July 2025 13:40:57 +0000 (0:00:00.195) 0:00:09.208 ********* 2025-07-12 13:41:04.042799 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.042810 | orchestrator | 2025-07-12 13:41:04.042821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:04.042832 | orchestrator | Saturday 12 July 2025 13:40:57 +0000 (0:00:00.208) 0:00:09.417 ********* 2025-07-12 13:41:04.042843 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.042854 | orchestrator | 2025-07-12 13:41:04.042866 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-12 13:41:04.042897 | orchestrator | Saturday 12 July 2025 13:40:57 +0000 (0:00:00.197) 0:00:09.615 ********* 2025-07-12 13:41:04.042909 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.042920 | orchestrator | 2025-07-12 13:41:04.042935 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-12 13:41:04.042948 | orchestrator | Saturday 12 July 2025 13:40:58 +0000 (0:00:00.145) 0:00:09.760 ********* 2025-07-12 13:41:04.042961 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'09698b4c-8482-58a0-ad33-d3500ef3a9f7'}}) 2025-07-12 13:41:04.042976 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f35471dc-23d0-5222-b540-93882fae0f69'}}) 2025-07-12 13:41:04.042988 | orchestrator | 2025-07-12 13:41:04.043001 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-12 13:41:04.043015 | orchestrator | Saturday 12 July 2025 13:40:58 +0000 (0:00:00.196) 0:00:09.956 ********* 2025-07-12 13:41:04.043029 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'}) 2025-07-12 13:41:04.043041 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'}) 2025-07-12 13:41:04.043052 | orchestrator | 2025-07-12 13:41:04.043088 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-12 13:41:04.043100 | orchestrator | Saturday 12 July 2025 13:41:00 +0000 (0:00:02.057) 0:00:12.014 ********* 2025-07-12 13:41:04.043111 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:04.043123 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:04.043134 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043145 | orchestrator | 2025-07-12 13:41:04.043157 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-12 13:41:04.043168 | orchestrator | Saturday 12 July 2025 13:41:00 +0000 (0:00:00.153) 0:00:12.167 ********* 2025-07-12 13:41:04.043179 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'}) 2025-07-12 13:41:04.043190 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'}) 2025-07-12 13:41:04.043201 | orchestrator | 2025-07-12 13:41:04.043212 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-12 13:41:04.043223 | orchestrator | Saturday 12 July 2025 13:41:01 +0000 (0:00:01.460) 0:00:13.628 ********* 2025-07-12 13:41:04.043234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:04.043245 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:04.043256 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043267 | orchestrator | 2025-07-12 13:41:04.043277 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-12 13:41:04.043288 | orchestrator | Saturday 12 July 2025 13:41:02 +0000 (0:00:00.143) 0:00:13.772 ********* 2025-07-12 13:41:04.043299 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043310 | orchestrator | 2025-07-12 13:41:04.043326 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-12 13:41:04.043356 | orchestrator | Saturday 12 July 2025 13:41:02 +0000 (0:00:00.138) 0:00:13.910 ********* 2025-07-12 13:41:04.043368 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:04.043379 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:04.043390 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043401 | orchestrator | 2025-07-12 13:41:04.043413 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-12 13:41:04.043424 | orchestrator | Saturday 12 July 2025 13:41:02 +0000 (0:00:00.338) 0:00:14.249 ********* 2025-07-12 13:41:04.043435 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043446 | orchestrator | 2025-07-12 13:41:04.043457 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-12 13:41:04.043468 | orchestrator | Saturday 12 July 2025 13:41:02 +0000 (0:00:00.163) 0:00:14.413 ********* 2025-07-12 13:41:04.043479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:04.043490 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:04.043501 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043512 | orchestrator | 2025-07-12 13:41:04.043523 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-12 13:41:04.043542 | orchestrator | Saturday 12 July 2025 13:41:02 +0000 (0:00:00.147) 0:00:14.560 ********* 2025-07-12 13:41:04.043554 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043565 | orchestrator | 2025-07-12 13:41:04.043604 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-12 13:41:04.043615 | orchestrator | Saturday 12 July 2025 13:41:02 +0000 (0:00:00.136) 0:00:14.697 ********* 2025-07-12 13:41:04.043626 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:04.043638 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:04.043649 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043660 | orchestrator | 2025-07-12 13:41:04.043671 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-12 13:41:04.043682 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.156) 0:00:14.854 ********* 2025-07-12 13:41:04.043693 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:41:04.043704 | orchestrator | 2025-07-12 13:41:04.043715 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-12 13:41:04.043726 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.158) 0:00:15.012 ********* 2025-07-12 13:41:04.043737 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:04.043748 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:04.043759 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043771 | orchestrator | 2025-07-12 13:41:04.043782 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-12 13:41:04.043793 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.151) 0:00:15.164 ********* 2025-07-12 13:41:04.043804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  
2025-07-12 13:41:04.043815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:04.043827 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043838 | orchestrator | 2025-07-12 13:41:04.043849 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-12 13:41:04.043860 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.150) 0:00:15.314 ********* 2025-07-12 13:41:04.043871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:04.043882 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:04.043893 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043904 | orchestrator | 2025-07-12 13:41:04.043915 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-12 13:41:04.043926 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.150) 0:00:15.464 ********* 2025-07-12 13:41:04.043937 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043948 | orchestrator | 2025-07-12 13:41:04.043959 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-12 13:41:04.043970 | orchestrator | Saturday 12 July 2025 13:41:03 +0000 (0:00:00.138) 0:00:15.603 ********* 2025-07-12 13:41:04.043981 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:04.043992 | orchestrator | 2025-07-12 13:41:04.044009 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-12 13:41:10.393125 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.132) 
0:00:15.735 ********* 2025-07-12 13:41:10.393235 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.393251 | orchestrator | 2025-07-12 13:41:10.393264 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-12 13:41:10.393275 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.137) 0:00:15.873 ********* 2025-07-12 13:41:10.393286 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 13:41:10.393298 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-12 13:41:10.393310 | orchestrator | } 2025-07-12 13:41:10.393321 | orchestrator | 2025-07-12 13:41:10.393332 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-12 13:41:10.393343 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.331) 0:00:16.204 ********* 2025-07-12 13:41:10.393354 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 13:41:10.393365 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-12 13:41:10.393376 | orchestrator | } 2025-07-12 13:41:10.393387 | orchestrator | 2025-07-12 13:41:10.393398 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-12 13:41:10.393409 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.145) 0:00:16.349 ********* 2025-07-12 13:41:10.393419 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 13:41:10.393430 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-12 13:41:10.393442 | orchestrator | } 2025-07-12 13:41:10.393453 | orchestrator | 2025-07-12 13:41:10.393464 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-12 13:41:10.393475 | orchestrator | Saturday 12 July 2025 13:41:04 +0000 (0:00:00.147) 0:00:16.497 ********* 2025-07-12 13:41:10.393486 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:41:10.393497 | orchestrator | 2025-07-12 13:41:10.393508 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-07-12 13:41:10.393519 | orchestrator | Saturday 12 July 2025 13:41:05 +0000 (0:00:00.683) 0:00:17.181 ********* 2025-07-12 13:41:10.393530 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:41:10.393540 | orchestrator | 2025-07-12 13:41:10.393552 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-12 13:41:10.393590 | orchestrator | Saturday 12 July 2025 13:41:05 +0000 (0:00:00.510) 0:00:17.691 ********* 2025-07-12 13:41:10.393602 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:41:10.393612 | orchestrator | 2025-07-12 13:41:10.393623 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-12 13:41:10.393634 | orchestrator | Saturday 12 July 2025 13:41:06 +0000 (0:00:00.541) 0:00:18.232 ********* 2025-07-12 13:41:10.393645 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:41:10.393657 | orchestrator | 2025-07-12 13:41:10.393669 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-12 13:41:10.393681 | orchestrator | Saturday 12 July 2025 13:41:06 +0000 (0:00:00.136) 0:00:18.369 ********* 2025-07-12 13:41:10.393693 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.393705 | orchestrator | 2025-07-12 13:41:10.393718 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-12 13:41:10.393730 | orchestrator | Saturday 12 July 2025 13:41:06 +0000 (0:00:00.120) 0:00:18.490 ********* 2025-07-12 13:41:10.393742 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.393753 | orchestrator | 2025-07-12 13:41:10.393765 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-12 13:41:10.393777 | orchestrator | Saturday 12 July 2025 13:41:06 +0000 (0:00:00.107) 0:00:18.598 ********* 2025-07-12 13:41:10.393788 | orchestrator | ok: 
[testbed-node-3] => { 2025-07-12 13:41:10.393801 | orchestrator |  "vgs_report": { 2025-07-12 13:41:10.393813 | orchestrator |  "vg": [] 2025-07-12 13:41:10.393825 | orchestrator |  } 2025-07-12 13:41:10.393837 | orchestrator | } 2025-07-12 13:41:10.393849 | orchestrator | 2025-07-12 13:41:10.393861 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-12 13:41:10.393900 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.142) 0:00:18.741 ********* 2025-07-12 13:41:10.393913 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.393925 | orchestrator | 2025-07-12 13:41:10.393954 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-12 13:41:10.393966 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.145) 0:00:18.886 ********* 2025-07-12 13:41:10.393977 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.393988 | orchestrator | 2025-07-12 13:41:10.393999 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-12 13:41:10.394010 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.119) 0:00:19.005 ********* 2025-07-12 13:41:10.394083 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394094 | orchestrator | 2025-07-12 13:41:10.394105 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-12 13:41:10.394116 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.394) 0:00:19.400 ********* 2025-07-12 13:41:10.394127 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394138 | orchestrator | 2025-07-12 13:41:10.394149 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-12 13:41:10.394159 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.126) 0:00:19.527 ********* 2025-07-12 13:41:10.394170 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 13:41:10.394181 | orchestrator | 2025-07-12 13:41:10.394192 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-12 13:41:10.394203 | orchestrator | Saturday 12 July 2025 13:41:07 +0000 (0:00:00.134) 0:00:19.661 ********* 2025-07-12 13:41:10.394214 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394225 | orchestrator | 2025-07-12 13:41:10.394236 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-12 13:41:10.394247 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.138) 0:00:19.799 ********* 2025-07-12 13:41:10.394258 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394268 | orchestrator | 2025-07-12 13:41:10.394280 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-12 13:41:10.394290 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.137) 0:00:19.936 ********* 2025-07-12 13:41:10.394301 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394312 | orchestrator | 2025-07-12 13:41:10.394331 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-12 13:41:10.394361 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.137) 0:00:20.073 ********* 2025-07-12 13:41:10.394373 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394384 | orchestrator | 2025-07-12 13:41:10.394395 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-12 13:41:10.394405 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.156) 0:00:20.230 ********* 2025-07-12 13:41:10.394416 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394427 | orchestrator | 2025-07-12 13:41:10.394437 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-12 13:41:10.394448 | 
orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.142) 0:00:20.373 ********* 2025-07-12 13:41:10.394459 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394469 | orchestrator | 2025-07-12 13:41:10.394480 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-12 13:41:10.394491 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.142) 0:00:20.515 ********* 2025-07-12 13:41:10.394502 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394513 | orchestrator | 2025-07-12 13:41:10.394523 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-12 13:41:10.394534 | orchestrator | Saturday 12 July 2025 13:41:08 +0000 (0:00:00.145) 0:00:20.660 ********* 2025-07-12 13:41:10.394545 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394575 | orchestrator | 2025-07-12 13:41:10.394586 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-12 13:41:10.394606 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.139) 0:00:20.800 ********* 2025-07-12 13:41:10.394617 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394627 | orchestrator | 2025-07-12 13:41:10.394638 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-12 13:41:10.394649 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.127) 0:00:20.927 ********* 2025-07-12 13:41:10.394662 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:10.394674 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:10.394685 | orchestrator | skipping: [testbed-node-3] 2025-07-12 
13:41:10.394696 | orchestrator | 2025-07-12 13:41:10.394706 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-12 13:41:10.394717 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.153) 0:00:21.081 ********* 2025-07-12 13:41:10.394728 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:10.394739 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:10.394750 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394761 | orchestrator | 2025-07-12 13:41:10.394771 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-12 13:41:10.394782 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.337) 0:00:21.418 ********* 2025-07-12 13:41:10.394793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:10.394804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:10.394815 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394826 | orchestrator | 2025-07-12 13:41:10.394836 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-12 13:41:10.394847 | orchestrator | Saturday 12 July 2025 13:41:09 +0000 (0:00:00.151) 0:00:21.570 ********* 2025-07-12 13:41:10.394858 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 
13:41:10.394868 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:10.394879 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394890 | orchestrator | 2025-07-12 13:41:10.394901 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-12 13:41:10.394911 | orchestrator | Saturday 12 July 2025 13:41:10 +0000 (0:00:00.175) 0:00:21.746 ********* 2025-07-12 13:41:10.394922 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:10.394933 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:10.394944 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:10.394954 | orchestrator | 2025-07-12 13:41:10.394965 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-12 13:41:10.394976 | orchestrator | Saturday 12 July 2025 13:41:10 +0000 (0:00:00.193) 0:00:21.939 ********* 2025-07-12 13:41:10.394992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:10.395017 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:15.843074 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:15.843175 | orchestrator | 2025-07-12 13:41:15.843191 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-12 13:41:15.843205 | orchestrator | Saturday 12 July 2025 
13:41:10 +0000 (0:00:00.151) 0:00:22.091 ********* 2025-07-12 13:41:15.843217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:15.843229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:15.843240 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:15.843251 | orchestrator | 2025-07-12 13:41:15.843263 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-12 13:41:15.843274 | orchestrator | Saturday 12 July 2025 13:41:10 +0000 (0:00:00.155) 0:00:22.246 ********* 2025-07-12 13:41:15.843284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:15.843295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:15.843306 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:15.843317 | orchestrator | 2025-07-12 13:41:15.843328 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-12 13:41:15.843339 | orchestrator | Saturday 12 July 2025 13:41:10 +0000 (0:00:00.150) 0:00:22.397 ********* 2025-07-12 13:41:15.843350 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:41:15.843361 | orchestrator | 2025-07-12 13:41:15.843373 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-12 13:41:15.843383 | orchestrator | Saturday 12 July 2025 13:41:11 +0000 (0:00:00.526) 0:00:22.923 ********* 2025-07-12 13:41:15.843394 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:41:15.843405 | 
orchestrator | 2025-07-12 13:41:15.843416 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-12 13:41:15.843427 | orchestrator | Saturday 12 July 2025 13:41:11 +0000 (0:00:00.574) 0:00:23.498 ********* 2025-07-12 13:41:15.843438 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:41:15.843448 | orchestrator | 2025-07-12 13:41:15.843459 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-12 13:41:15.843470 | orchestrator | Saturday 12 July 2025 13:41:11 +0000 (0:00:00.146) 0:00:23.644 ********* 2025-07-12 13:41:15.843481 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'vg_name': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'}) 2025-07-12 13:41:15.843494 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'vg_name': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'}) 2025-07-12 13:41:15.843504 | orchestrator | 2025-07-12 13:41:15.843516 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-12 13:41:15.843528 | orchestrator | Saturday 12 July 2025 13:41:12 +0000 (0:00:00.166) 0:00:23.811 ********* 2025-07-12 13:41:15.843566 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:15.843580 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:15.843593 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:15.843605 | orchestrator | 2025-07-12 13:41:15.843617 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-12 13:41:15.843661 | orchestrator | Saturday 12 July 2025 13:41:12 +0000 
(0:00:00.154) 0:00:23.966 ********* 2025-07-12 13:41:15.843674 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:15.843686 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:15.843699 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:15.843712 | orchestrator | 2025-07-12 13:41:15.843724 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-12 13:41:15.843737 | orchestrator | Saturday 12 July 2025 13:41:12 +0000 (0:00:00.360) 0:00:24.327 ********* 2025-07-12 13:41:15.843749 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'})  2025-07-12 13:41:15.843761 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'})  2025-07-12 13:41:15.843774 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:41:15.843786 | orchestrator | 2025-07-12 13:41:15.843799 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-12 13:41:15.843812 | orchestrator | Saturday 12 July 2025 13:41:12 +0000 (0:00:00.162) 0:00:24.489 ********* 2025-07-12 13:41:15.843825 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 13:41:15.843837 | orchestrator |  "lvm_report": { 2025-07-12 13:41:15.843850 | orchestrator |  "lv": [ 2025-07-12 13:41:15.843863 | orchestrator |  { 2025-07-12 13:41:15.843893 | orchestrator |  "lv_name": "osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7", 2025-07-12 13:41:15.843905 | orchestrator |  "vg_name": "ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7" 2025-07-12 
13:41:15.843916 | orchestrator |  }, 2025-07-12 13:41:15.843927 | orchestrator |  { 2025-07-12 13:41:15.843938 | orchestrator |  "lv_name": "osd-block-f35471dc-23d0-5222-b540-93882fae0f69", 2025-07-12 13:41:15.843949 | orchestrator |  "vg_name": "ceph-f35471dc-23d0-5222-b540-93882fae0f69" 2025-07-12 13:41:15.843960 | orchestrator |  } 2025-07-12 13:41:15.843971 | orchestrator |  ], 2025-07-12 13:41:15.843982 | orchestrator |  "pv": [ 2025-07-12 13:41:15.843992 | orchestrator |  { 2025-07-12 13:41:15.844003 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-12 13:41:15.844014 | orchestrator |  "vg_name": "ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7" 2025-07-12 13:41:15.844025 | orchestrator |  }, 2025-07-12 13:41:15.844036 | orchestrator |  { 2025-07-12 13:41:15.844047 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-12 13:41:15.844058 | orchestrator |  "vg_name": "ceph-f35471dc-23d0-5222-b540-93882fae0f69" 2025-07-12 13:41:15.844068 | orchestrator |  } 2025-07-12 13:41:15.844079 | orchestrator |  ] 2025-07-12 13:41:15.844090 | orchestrator |  } 2025-07-12 13:41:15.844101 | orchestrator | } 2025-07-12 13:41:15.844112 | orchestrator | 2025-07-12 13:41:15.844123 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-12 13:41:15.844134 | orchestrator | 2025-07-12 13:41:15.844162 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 13:41:15.844173 | orchestrator | Saturday 12 July 2025 13:41:13 +0000 (0:00:00.281) 0:00:24.771 ********* 2025-07-12 13:41:15.844185 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-12 13:41:15.844196 | orchestrator | 2025-07-12 13:41:15.844207 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 13:41:15.844218 | orchestrator | Saturday 12 July 2025 13:41:13 +0000 (0:00:00.260) 0:00:25.031 ********* 2025-07-12 13:41:15.844229 | orchestrator | ok: 
[testbed-node-4] 2025-07-12 13:41:15.844240 | orchestrator | 2025-07-12 13:41:15.844251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:15.844270 | orchestrator | Saturday 12 July 2025 13:41:13 +0000 (0:00:00.226) 0:00:25.258 ********* 2025-07-12 13:41:15.844282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-12 13:41:15.844293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-07-12 13:41:15.844304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-12 13:41:15.844315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-12 13:41:15.844325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-12 13:41:15.844336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-12 13:41:15.844347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-12 13:41:15.844358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-12 13:41:15.844369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-12 13:41:15.844380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-12 13:41:15.844391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-12 13:41:15.844402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-12 13:41:15.844413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-12 13:41:15.844424 | orchestrator | 2025-07-12 
13:41:15.844435 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:15.844445 | orchestrator | Saturday 12 July 2025 13:41:13 +0000 (0:00:00.435) 0:00:25.693 ********* 2025-07-12 13:41:15.844456 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:15.844467 | orchestrator | 2025-07-12 13:41:15.844478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:15.844489 | orchestrator | Saturday 12 July 2025 13:41:14 +0000 (0:00:00.202) 0:00:25.896 ********* 2025-07-12 13:41:15.844500 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:15.844511 | orchestrator | 2025-07-12 13:41:15.844522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:15.844533 | orchestrator | Saturday 12 July 2025 13:41:14 +0000 (0:00:00.200) 0:00:26.096 ********* 2025-07-12 13:41:15.844563 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:15.844574 | orchestrator | 2025-07-12 13:41:15.844585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:15.844596 | orchestrator | Saturday 12 July 2025 13:41:14 +0000 (0:00:00.195) 0:00:26.292 ********* 2025-07-12 13:41:15.844607 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:15.844617 | orchestrator | 2025-07-12 13:41:15.844628 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:15.844639 | orchestrator | Saturday 12 July 2025 13:41:15 +0000 (0:00:00.631) 0:00:26.923 ********* 2025-07-12 13:41:15.844650 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:15.844661 | orchestrator | 2025-07-12 13:41:15.844672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:15.844682 | orchestrator | Saturday 12 July 2025 13:41:15 +0000 (0:00:00.211) 
0:00:27.135 *********
2025-07-12 13:41:15.844699 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:15.844710 | orchestrator |
2025-07-12 13:41:15.844721 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:15.844732 | orchestrator | Saturday 12 July 2025 13:41:15 +0000 (0:00:00.203) 0:00:27.339 *********
2025-07-12 13:41:15.844743 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:15.844754 | orchestrator |
2025-07-12 13:41:15.844771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:26.174625 | orchestrator | Saturday 12 July 2025 13:41:15 +0000 (0:00:00.198) 0:00:27.538 *********
2025-07-12 13:41:26.174787 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.174805 | orchestrator |
2025-07-12 13:41:26.174817 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:26.174829 | orchestrator | Saturday 12 July 2025 13:41:16 +0000 (0:00:00.188) 0:00:27.726 *********
2025-07-12 13:41:26.174840 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041)
2025-07-12 13:41:26.174853 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041)
2025-07-12 13:41:26.174864 | orchestrator |
2025-07-12 13:41:26.174876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:26.174887 | orchestrator | Saturday 12 July 2025 13:41:16 +0000 (0:00:00.432) 0:00:28.158 *********
2025-07-12 13:41:26.174898 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98)
2025-07-12 13:41:26.174909 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98)
2025-07-12 13:41:26.174920 | orchestrator |
2025-07-12 13:41:26.174931 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:26.174942 | orchestrator | Saturday 12 July 2025 13:41:16 +0000 (0:00:00.445) 0:00:28.603 *********
2025-07-12 13:41:26.174953 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa)
2025-07-12 13:41:26.174965 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa)
2025-07-12 13:41:26.174976 | orchestrator |
2025-07-12 13:41:26.174987 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:26.174998 | orchestrator | Saturday 12 July 2025 13:41:17 +0000 (0:00:00.460) 0:00:29.064 *********
2025-07-12 13:41:26.175009 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1)
2025-07-12 13:41:26.175020 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1)
2025-07-12 13:41:26.175031 | orchestrator |
2025-07-12 13:41:26.175042 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 13:41:26.175053 | orchestrator | Saturday 12 July 2025 13:41:17 +0000 (0:00:00.437) 0:00:29.501 *********
2025-07-12 13:41:26.175064 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-12 13:41:26.175077 | orchestrator |
2025-07-12 13:41:26.175089 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175101 | orchestrator | Saturday 12 July 2025 13:41:18 +0000 (0:00:00.377) 0:00:29.879 *********
2025-07-12 13:41:26.175113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-07-12 13:41:26.175126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-07-12 13:41:26.175138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-07-12 13:41:26.175149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-07-12 13:41:26.175160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-07-12 13:41:26.175171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-07-12 13:41:26.175182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-07-12 13:41:26.175193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-07-12 13:41:26.175204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-07-12 13:41:26.175215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-07-12 13:41:26.175250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-07-12 13:41:26.175262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-07-12 13:41:26.175273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-07-12 13:41:26.175284 | orchestrator |
2025-07-12 13:41:26.175295 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175306 | orchestrator | Saturday 12 July 2025 13:41:18 +0000 (0:00:00.632) 0:00:30.511 *********
2025-07-12 13:41:26.175317 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175327 | orchestrator |
2025-07-12 13:41:26.175338 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175349 | orchestrator | Saturday 12 July 2025 13:41:19 +0000 (0:00:00.211) 0:00:30.723 *********
2025-07-12 13:41:26.175360 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175371 | orchestrator |
2025-07-12 13:41:26.175382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175407 | orchestrator | Saturday 12 July 2025 13:41:19 +0000 (0:00:00.197) 0:00:30.920 *********
2025-07-12 13:41:26.175418 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175429 | orchestrator |
2025-07-12 13:41:26.175440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175451 | orchestrator | Saturday 12 July 2025 13:41:19 +0000 (0:00:00.193) 0:00:31.113 *********
2025-07-12 13:41:26.175462 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175473 | orchestrator |
2025-07-12 13:41:26.175503 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175515 | orchestrator | Saturday 12 July 2025 13:41:19 +0000 (0:00:00.207) 0:00:31.321 *********
2025-07-12 13:41:26.175563 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175574 | orchestrator |
2025-07-12 13:41:26.175585 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175596 | orchestrator | Saturday 12 July 2025 13:41:19 +0000 (0:00:00.188) 0:00:31.509 *********
2025-07-12 13:41:26.175607 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175617 | orchestrator |
2025-07-12 13:41:26.175628 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175639 | orchestrator | Saturday 12 July 2025 13:41:20 +0000 (0:00:00.256) 0:00:31.766 *********
2025-07-12 13:41:26.175650 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175660 | orchestrator |
2025-07-12 13:41:26.175671 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175682 | orchestrator | Saturday 12 July 2025 13:41:20 +0000 (0:00:00.201) 0:00:31.968 *********
2025-07-12 13:41:26.175693 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175703 | orchestrator |
2025-07-12 13:41:26.175714 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175725 | orchestrator | Saturday 12 July 2025 13:41:20 +0000 (0:00:00.201) 0:00:32.170 *********
2025-07-12 13:41:26.175736 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-07-12 13:41:26.175747 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-07-12 13:41:26.175758 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-07-12 13:41:26.175768 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-07-12 13:41:26.175779 | orchestrator |
2025-07-12 13:41:26.175790 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175801 | orchestrator | Saturday 12 July 2025 13:41:21 +0000 (0:00:00.840) 0:00:33.010 *********
2025-07-12 13:41:26.175811 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175822 | orchestrator |
2025-07-12 13:41:26.175833 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175844 | orchestrator | Saturday 12 July 2025 13:41:21 +0000 (0:00:00.205) 0:00:33.216 *********
2025-07-12 13:41:26.175854 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175874 | orchestrator |
2025-07-12 13:41:26.175885 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175896 | orchestrator | Saturday 12 July 2025 13:41:21 +0000 (0:00:00.195) 0:00:33.411 *********
2025-07-12 13:41:26.175907 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175917 | orchestrator |
2025-07-12 13:41:26.175928 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 13:41:26.175939 | orchestrator | Saturday 12 July 2025 13:41:22 +0000 (0:00:00.664) 0:00:34.075 *********
2025-07-12 13:41:26.175950 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.175961 | orchestrator |
2025-07-12 13:41:26.175972 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-07-12 13:41:26.175983 | orchestrator | Saturday 12 July 2025 13:41:22 +0000 (0:00:00.197) 0:00:34.273 *********
2025-07-12 13:41:26.175994 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.176004 | orchestrator |
2025-07-12 13:41:26.176015 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-07-12 13:41:26.176026 | orchestrator | Saturday 12 July 2025 13:41:22 +0000 (0:00:00.147) 0:00:34.420 *********
2025-07-12 13:41:26.176037 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f88c8806-82e1-5c41-a829-e62dc4a8fdb6'}})
2025-07-12 13:41:26.176048 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fbedf305-2fae-5605-926c-96a21a5245d1'}})
2025-07-12 13:41:26.176058 | orchestrator |
2025-07-12 13:41:26.176069 | orchestrator | TASK [Create block VGs] ********************************************************
2025-07-12 13:41:26.176080 | orchestrator | Saturday 12 July 2025 13:41:22 +0000 (0:00:00.200) 0:00:34.620 *********
2025-07-12 13:41:26.176092 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:26.176104 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:26.176115 |
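[Editor's note, not part of the job output: the tasks above derive one LVM VG/LV pair per entry in `ceph_osd_devices`, following the naming visible in the log (`ceph-<osd_lvm_uuid>` / `osd-block-<osd_lvm_uuid>`). A minimal Python sketch of that mapping, assuming only the conventions shown above — this is illustrative, not the playbook's actual code:]

```python
# Illustrative sketch (not the playbook's code) of the
# "Create dict of block VGs -> PVs from ceph_osd_devices" step.
# Naming convention is taken from the log:
#   VG: ceph-<osd_lvm_uuid>, LV: osd-block-<osd_lvm_uuid>
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "f88c8806-82e1-5c41-a829-e62dc4a8fdb6"},
    "sdc": {"osd_lvm_uuid": "fbedf305-2fae-5605-926c-96a21a5245d1"},
}

def lvm_volumes(devices):
    """Map each ceph_osd_devices entry to its data LV and data VG name."""
    return [
        {"data": f"osd-block-{v['osd_lvm_uuid']}",
         "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
        for v in devices.values()
    ]
```

These derived pairs match the `(item={'data': …, 'data_vg': …})` loop items that appear in the "Create block VGs" and "Create block LVs" tasks.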
orchestrator |
2025-07-12 13:41:26.176125 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-07-12 13:41:26.176136 | orchestrator | Saturday 12 July 2025 13:41:24 +0000 (0:00:01.812) 0:00:36.433 *********
2025-07-12 13:41:26.176147 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:26.176159 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:26.176170 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:26.176181 | orchestrator |
2025-07-12 13:41:26.176192 | orchestrator | TASK [Create block LVs] ********************************************************
2025-07-12 13:41:26.176202 | orchestrator | Saturday 12 July 2025 13:41:24 +0000 (0:00:00.152) 0:00:36.585 *********
2025-07-12 13:41:26.176213 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:26.176224 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:26.176235 | orchestrator |
2025-07-12 13:41:26.176253 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-07-12 13:41:31.677630 | orchestrator | Saturday 12 July 2025 13:41:26 +0000 (0:00:01.277) 0:00:37.863 *********
2025-07-12 13:41:31.677748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:31.677766 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:31.677806 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.677819 | orchestrator |
2025-07-12 13:41:31.677831 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-07-12 13:41:31.677842 | orchestrator | Saturday 12 July 2025 13:41:26 +0000 (0:00:00.152) 0:00:38.016 *********
2025-07-12 13:41:31.677853 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.677864 | orchestrator |
2025-07-12 13:41:31.677892 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-07-12 13:41:31.677904 | orchestrator | Saturday 12 July 2025 13:41:26 +0000 (0:00:00.128) 0:00:38.145 *********
2025-07-12 13:41:31.677915 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:31.677926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:31.677937 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.677948 | orchestrator |
2025-07-12 13:41:31.677959 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-12 13:41:31.677970 | orchestrator | Saturday 12 July 2025 13:41:26 +0000 (0:00:00.146) 0:00:38.291 *********
2025-07-12 13:41:31.677981 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.677992 | orchestrator |
2025-07-12 13:41:31.678002 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-12 13:41:31.678013 | orchestrator | Saturday 12 July 2025 13:41:26 +0000 (0:00:00.127) 0:00:38.419 *********
2025-07-12 13:41:31.678081 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:31.678093 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:31.678104 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.678114 | orchestrator |
2025-07-12 13:41:31.678127 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-12 13:41:31.678139 | orchestrator | Saturday 12 July 2025 13:41:26 +0000 (0:00:00.146) 0:00:38.565 *********
2025-07-12 13:41:31.678151 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.678163 | orchestrator |
2025-07-12 13:41:31.678174 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-07-12 13:41:31.678186 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.336) 0:00:38.901 *********
2025-07-12 13:41:31.678199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:31.678211 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:31.678223 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.678235 | orchestrator |
2025-07-12 13:41:31.678247 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-07-12 13:41:31.678259 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.168) 0:00:39.070 *********
2025-07-12 13:41:31.678272 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:31.678285 | orchestrator |
2025-07-12 13:41:31.678297 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-07-12 13:41:31.678309 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.137) 0:00:39.208 *********
2025-07-12 13:41:31.678320 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:31.678332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:31.678354 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.678366 | orchestrator |
2025-07-12 13:41:31.678378 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-07-12 13:41:31.678390 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.155) 0:00:39.363 *********
2025-07-12 13:41:31.678402 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:31.678420 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:31.678434 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.678446 | orchestrator |
2025-07-12 13:41:31.678458 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-07-12 13:41:31.678471 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.151) 0:00:39.515 *********
2025-07-12 13:41:31.678499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:31.678544 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:31.678556 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.678567 | orchestrator |
2025-07-12 13:41:31.678577 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-07-12 13:41:31.678588 | orchestrator | Saturday 12 July 2025 13:41:27 +0000 (0:00:00.147) 0:00:39.662 *********
2025-07-12 13:41:31.678599 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.678610 | orchestrator |
2025-07-12 13:41:31.678620 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-07-12 13:41:31.678631 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.138) 0:00:39.801 *********
2025-07-12 13:41:31.678642 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.678653 | orchestrator |
2025-07-12 13:41:31.678663 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-07-12 13:41:31.678674 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.149) 0:00:39.950 *********
2025-07-12 13:41:31.678685 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.678696 | orchestrator |
2025-07-12 13:41:31.678706 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-07-12 13:41:31.678717 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.145) 0:00:40.096 *********
2025-07-12 13:41:31.678728 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 13:41:31.678739 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-07-12 13:41:31.678750 | orchestrator | }
2025-07-12 13:41:31.678761 | orchestrator |
2025-07-12 13:41:31.678772 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-07-12 13:41:31.678782 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.144) 0:00:40.241 *********
2025-07-12 13:41:31.678793 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 13:41:31.678804 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-07-12 13:41:31.678815 | orchestrator | }
2025-07-12 13:41:31.678825 | orchestrator |
2025-07-12 13:41:31.678836 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-07-12 13:41:31.678847 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.141) 0:00:40.383 *********
2025-07-12 13:41:31.678858 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 13:41:31.678869 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-07-12 13:41:31.678881 | orchestrator | }
2025-07-12 13:41:31.678891 | orchestrator |
2025-07-12 13:41:31.678902 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-07-12 13:41:31.678913 | orchestrator | Saturday 12 July 2025 13:41:28 +0000 (0:00:00.145) 0:00:40.528 *********
2025-07-12 13:41:31.678924 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:31.678934 | orchestrator |
2025-07-12 13:41:31.678953 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-07-12 13:41:31.678964 | orchestrator | Saturday 12 July 2025 13:41:29 +0000 (0:00:00.732) 0:00:41.261 *********
2025-07-12 13:41:31.678974 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:31.678985 | orchestrator |
2025-07-12 13:41:31.678996 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-07-12 13:41:31.679007 | orchestrator | Saturday 12 July 2025 13:41:30 +0000 (0:00:00.495) 0:00:41.757 *********
2025-07-12 13:41:31.679018 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:31.679028 | orchestrator |
2025-07-12 13:41:31.679039 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-07-12 13:41:31.679050 | orchestrator | Saturday 12 July 2025 13:41:30 +0000 (0:00:00.523) 0:00:42.280 *********
2025-07-12 13:41:31.679061 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:31.679071 | orchestrator |
2025-07-12 13:41:31.679082 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-12 13:41:31.679093 | orchestrator | Saturday 12 July 2025 13:41:30 +0000 (0:00:00.144) 0:00:42.424 *********
2025-07-12 13:41:31.679104 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.679115 | orchestrator |
2025-07-12 13:41:31.679125 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-12 13:41:31.679136 | orchestrator | Saturday 12 July 2025 13:41:30 +0000 (0:00:00.115) 0:00:42.540 *********
2025-07-12 13:41:31.679147 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.679158 | orchestrator |
2025-07-12 13:41:31.679168 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-07-12 13:41:31.679179 | orchestrator | Saturday 12 July 2025 13:41:30 +0000 (0:00:00.105) 0:00:42.645 *********
2025-07-12 13:41:31.679190 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 13:41:31.679201 | orchestrator |  "vgs_report": {
2025-07-12 13:41:31.679212 | orchestrator |  "vg": []
2025-07-12 13:41:31.679223 | orchestrator |  }
2025-07-12 13:41:31.679233 | orchestrator | }
2025-07-12 13:41:31.679244 | orchestrator |
2025-07-12 13:41:31.679255 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-07-12 13:41:31.679266 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.139) 0:00:42.785 *********
2025-07-12 13:41:31.679277 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.679287 | orchestrator |
2025-07-12 13:41:31.679298 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-12 13:41:31.679309 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.155) 0:00:42.941 *********
2025-07-12
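[Editor's note, not part of the job output: the "Gather … VGs" tasks above collect LVM reports and the "Combine JSON" task merges them; with no DB/WAL devices configured, the combined `vgs_report` printed in the log is `{"vg": []}`. A hedged sketch of that merge, assuming the standard `vgs --reportformat json` output shape (`{"report": [{"vg": […]}]}`) — illustrative only, not the playbook's actual code:]

```python
import json

# Illustrative sketch (not the playbook's code) of combining several
# `vgs --reportformat json` outputs (DB, WAL, DB+WAL) into one report.
def combine_vg_reports(*vgs_outputs):
    """Merge the 'vg' arrays of multiple LVM JSON reports."""
    vgs = []
    for raw in vgs_outputs:
        vgs.extend(json.loads(raw)["report"][0]["vg"])
    return {"vg": vgs}

# No DB/WAL devices on testbed-node-4, so every report is empty:
empty = '{"report": [{"vg": []}]}'
vgs_report = combine_vg_reports(empty, empty, empty)
```

With three empty reports this yields the `{"vg": []}` value shown in the "Print LVM VGs report data" task.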
13:41:31.679320 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.679330 | orchestrator |
2025-07-12 13:41:31.679341 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-12 13:41:31.679352 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.146) 0:00:43.087 *********
2025-07-12 13:41:31.679367 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.679379 | orchestrator |
2025-07-12 13:41:31.679390 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-12 13:41:31.679400 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.134) 0:00:43.221 *********
2025-07-12 13:41:31.679411 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:31.679422 | orchestrator |
2025-07-12 13:41:31.679433 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-12 13:41:31.679451 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.150) 0:00:43.372 *********
2025-07-12 13:41:36.314382 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.314540 | orchestrator |
2025-07-12 13:41:36.314559 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-12 13:41:36.314572 | orchestrator | Saturday 12 July 2025 13:41:31 +0000 (0:00:00.136) 0:00:43.509 *********
2025-07-12 13:41:36.314584 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.314596 | orchestrator |
2025-07-12 13:41:36.314607 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-12 13:41:36.314645 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.326) 0:00:43.835 *********
2025-07-12 13:41:36.314656 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.314668 | orchestrator |
2025-07-12 13:41:36.314679 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-12 13:41:36.314690 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.145) 0:00:43.980 *********
2025-07-12 13:41:36.314701 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.314712 | orchestrator |
2025-07-12 13:41:36.314723 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-12 13:41:36.314734 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.128) 0:00:44.109 *********
2025-07-12 13:41:36.314745 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.314756 | orchestrator |
2025-07-12 13:41:36.314767 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-12 13:41:36.314778 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.137) 0:00:44.246 *********
2025-07-12 13:41:36.314790 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.314801 | orchestrator |
2025-07-12 13:41:36.314812 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-07-12 13:41:36.314823 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.142) 0:00:44.389 *********
2025-07-12 13:41:36.314834 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.314845 | orchestrator |
2025-07-12 13:41:36.314856 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-07-12 13:41:36.314867 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.135) 0:00:44.525 *********
2025-07-12 13:41:36.314878 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.314889 | orchestrator |
2025-07-12 13:41:36.314901 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-07-12 13:41:36.314914 | orchestrator | Saturday 12 July 2025 13:41:32 +0000 (0:00:00.138) 0:00:44.663 *********
2025-07-12 13:41:36.314926 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.314939 | orchestrator |
2025-07-12 13:41:36.314951 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-12 13:41:36.314963 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.127) 0:00:44.790 *********
2025-07-12 13:41:36.314975 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.314987 | orchestrator |
2025-07-12 13:41:36.314999 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-12 13:41:36.315011 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.138) 0:00:44.928 *********
2025-07-12 13:41:36.315024 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315038 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315050 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.315063 | orchestrator |
2025-07-12 13:41:36.315075 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-12 13:41:36.315088 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.150) 0:00:45.079 *********
2025-07-12 13:41:36.315100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315113 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315125 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.315138 | orchestrator |
2025-07-12 13:41:36.315149 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-12 13:41:36.315162 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.170) 0:00:45.250 *********
2025-07-12 13:41:36.315173 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315193 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315205 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.315217 | orchestrator |
2025-07-12 13:41:36.315230 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-12 13:41:36.315242 | orchestrator | Saturday 12 July 2025 13:41:33 +0000 (0:00:00.146) 0:00:45.396 *********
2025-07-12 13:41:36.315255 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315267 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315278 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.315289 | orchestrator |
2025-07-12 13:41:36.315300 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-07-12 13:41:36.315330 | orchestrator | Saturday 12 July 2025 13:41:34 +0000 (0:00:00.346) 0:00:45.743 *********
2025-07-12 13:41:36.315341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315352 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315364 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.315375 | orchestrator |
2025-07-12 13:41:36.315385 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-12 13:41:36.315396 | orchestrator | Saturday 12 July 2025 13:41:34 +0000 (0:00:00.158) 0:00:45.902 *********
2025-07-12 13:41:36.315407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315429 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.315440 | orchestrator |
2025-07-12 13:41:36.315451 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-12 13:41:36.315462 | orchestrator | Saturday 12 July 2025 13:41:34 +0000 (0:00:00.152) 0:00:46.054 *********
2025-07-12 13:41:36.315473 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315484 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315526 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.315537 | orchestrator |
2025-07-12 13:41:36.315549 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-07-12 13:41:36.315560 | orchestrator | Saturday 12 July 2025 13:41:34 +0000 (0:00:00.161) 0:00:46.215 *********
2025-07-12 13:41:36.315571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315642 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.315655 | orchestrator |
2025-07-12 13:41:36.315667 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-07-12 13:41:36.315678 | orchestrator | Saturday 12 July 2025 13:41:34 +0000 (0:00:00.157) 0:00:46.372 *********
2025-07-12 13:41:36.315697 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:36.315708 | orchestrator |
2025-07-12 13:41:36.315719 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-07-12 13:41:36.315730 | orchestrator | Saturday 12 July 2025 13:41:35 +0000 (0:00:00.508) 0:00:46.881 *********
2025-07-12 13:41:36.315741 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:36.315752 | orchestrator |
2025-07-12 13:41:36.315763 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-07-12 13:41:36.315774 | orchestrator | Saturday 12 July 2025 13:41:35 +0000 (0:00:00.504) 0:00:47.386 *********
2025-07-12 13:41:36.315784 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:41:36.315795 | orchestrator |
2025-07-12 13:41:36.315806 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-07-12 13:41:36.315817 | orchestrator | Saturday 12 July 2025 13:41:35 +0000 (0:00:00.149) 0:00:47.535 *********
2025-07-12 13:41:36.315828 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'vg_name': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315840 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'vg_name': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315851 | orchestrator |
2025-07-12 13:41:36.315861 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-07-12 13:41:36.315872 | orchestrator | Saturday 12 July 2025 13:41:35 +0000 (0:00:00.159) 0:00:47.694 *********
2025-07-12 13:41:36.315883 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315894 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315905 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:36.315916 | orchestrator |
2025-07-12 13:41:36.315926 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-12 13:41:36.315942 | orchestrator | Saturday 12 July 2025 13:41:36 +0000 (0:00:00.159) 0:00:47.853 *********
2025-07-12 13:41:36.315953 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})
2025-07-12 13:41:36.315964 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})
2025-07-12 13:41:36.315983 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:41:42.302884 | orchestrator |
2025-07-12 13:41:42.303013 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-12 13:41:42.303031 | orchestrator | Saturday 12 July 2025 13:41:36 +0000 (0:00:00.156) 0:00:48.010 *********
2025-07-12 13:41:42.303089 | orchestrator | skipping: [testbed-node-4] =>
(item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'})  2025-07-12 13:41:42.303104 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'})  2025-07-12 13:41:42.303116 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:41:42.303129 | orchestrator | 2025-07-12 13:41:42.303140 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-12 13:41:42.303152 | orchestrator | Saturday 12 July 2025 13:41:36 +0000 (0:00:00.148) 0:00:48.158 ********* 2025-07-12 13:41:42.303163 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 13:41:42.303174 | orchestrator |  "lvm_report": { 2025-07-12 13:41:42.303187 | orchestrator |  "lv": [ 2025-07-12 13:41:42.303198 | orchestrator |  { 2025-07-12 13:41:42.303209 | orchestrator |  "lv_name": "osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6", 2025-07-12 13:41:42.303246 | orchestrator |  "vg_name": "ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6" 2025-07-12 13:41:42.303257 | orchestrator |  }, 2025-07-12 13:41:42.303268 | orchestrator |  { 2025-07-12 13:41:42.303279 | orchestrator |  "lv_name": "osd-block-fbedf305-2fae-5605-926c-96a21a5245d1", 2025-07-12 13:41:42.303290 | orchestrator |  "vg_name": "ceph-fbedf305-2fae-5605-926c-96a21a5245d1" 2025-07-12 13:41:42.303301 | orchestrator |  } 2025-07-12 13:41:42.303355 | orchestrator |  ], 2025-07-12 13:41:42.303368 | orchestrator |  "pv": [ 2025-07-12 13:41:42.303380 | orchestrator |  { 2025-07-12 13:41:42.303392 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-12 13:41:42.303405 | orchestrator |  "vg_name": "ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6" 2025-07-12 13:41:42.303417 | orchestrator |  }, 2025-07-12 13:41:42.303429 | orchestrator |  { 2025-07-12 13:41:42.303442 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-12 13:41:42.303454 | orchestrator |  "vg_name": 
"ceph-fbedf305-2fae-5605-926c-96a21a5245d1" 2025-07-12 13:41:42.303466 | orchestrator |  } 2025-07-12 13:41:42.303548 | orchestrator |  ] 2025-07-12 13:41:42.303561 | orchestrator |  } 2025-07-12 13:41:42.303574 | orchestrator | } 2025-07-12 13:41:42.303591 | orchestrator | 2025-07-12 13:41:42.303614 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-12 13:41:42.303632 | orchestrator | 2025-07-12 13:41:42.303651 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 13:41:42.303670 | orchestrator | Saturday 12 July 2025 13:41:36 +0000 (0:00:00.493) 0:00:48.652 ********* 2025-07-12 13:41:42.303690 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-12 13:41:42.303713 | orchestrator | 2025-07-12 13:41:42.303734 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 13:41:42.303748 | orchestrator | Saturday 12 July 2025 13:41:37 +0000 (0:00:00.248) 0:00:48.900 ********* 2025-07-12 13:41:42.303759 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:41:42.303770 | orchestrator | 2025-07-12 13:41:42.303781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.303792 | orchestrator | Saturday 12 July 2025 13:41:37 +0000 (0:00:00.216) 0:00:49.117 ********* 2025-07-12 13:41:42.303803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-07-12 13:41:42.303813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-07-12 13:41:42.303824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-07-12 13:41:42.303835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-07-12 13:41:42.303845 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-07-12 13:41:42.303856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-07-12 13:41:42.303867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-07-12 13:41:42.303878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-12 13:41:42.303889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-12 13:41:42.303900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-12 13:41:42.303910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-12 13:41:42.303921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-12 13:41:42.303932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-12 13:41:42.303943 | orchestrator | 2025-07-12 13:41:42.303953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.303991 | orchestrator | Saturday 12 July 2025 13:41:37 +0000 (0:00:00.407) 0:00:49.525 ********* 2025-07-12 13:41:42.304003 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:42.304014 | orchestrator | 2025-07-12 13:41:42.304025 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304036 | orchestrator | Saturday 12 July 2025 13:41:38 +0000 (0:00:00.196) 0:00:49.721 ********* 2025-07-12 13:41:42.304047 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:42.304058 | orchestrator | 2025-07-12 13:41:42.304069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304099 | orchestrator | 
Saturday 12 July 2025 13:41:38 +0000 (0:00:00.198) 0:00:49.919 ********* 2025-07-12 13:41:42.304110 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:42.304121 | orchestrator | 2025-07-12 13:41:42.304132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304143 | orchestrator | Saturday 12 July 2025 13:41:38 +0000 (0:00:00.193) 0:00:50.113 ********* 2025-07-12 13:41:42.304154 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:42.304164 | orchestrator | 2025-07-12 13:41:42.304175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304186 | orchestrator | Saturday 12 July 2025 13:41:38 +0000 (0:00:00.208) 0:00:50.321 ********* 2025-07-12 13:41:42.304196 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:42.304207 | orchestrator | 2025-07-12 13:41:42.304218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304228 | orchestrator | Saturday 12 July 2025 13:41:38 +0000 (0:00:00.205) 0:00:50.527 ********* 2025-07-12 13:41:42.304239 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:42.304250 | orchestrator | 2025-07-12 13:41:42.304261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304271 | orchestrator | Saturday 12 July 2025 13:41:39 +0000 (0:00:00.610) 0:00:51.138 ********* 2025-07-12 13:41:42.304282 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:42.304293 | orchestrator | 2025-07-12 13:41:42.304304 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304314 | orchestrator | Saturday 12 July 2025 13:41:39 +0000 (0:00:00.222) 0:00:51.360 ********* 2025-07-12 13:41:42.304325 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:42.304336 | orchestrator | 2025-07-12 13:41:42.304347 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304358 | orchestrator | Saturday 12 July 2025 13:41:39 +0000 (0:00:00.212) 0:00:51.572 ********* 2025-07-12 13:41:42.304369 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92) 2025-07-12 13:41:42.304381 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92) 2025-07-12 13:41:42.304392 | orchestrator | 2025-07-12 13:41:42.304403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304413 | orchestrator | Saturday 12 July 2025 13:41:40 +0000 (0:00:00.426) 0:00:51.999 ********* 2025-07-12 13:41:42.304424 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094) 2025-07-12 13:41:42.304435 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094) 2025-07-12 13:41:42.304446 | orchestrator | 2025-07-12 13:41:42.304456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304467 | orchestrator | Saturday 12 July 2025 13:41:40 +0000 (0:00:00.429) 0:00:52.428 ********* 2025-07-12 13:41:42.304497 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f) 2025-07-12 13:41:42.304509 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f) 2025-07-12 13:41:42.304520 | orchestrator | 2025-07-12 13:41:42.304530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304550 | orchestrator | Saturday 12 July 2025 13:41:41 +0000 (0:00:00.410) 0:00:52.839 ********* 2025-07-12 13:41:42.304561 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29) 2025-07-12 13:41:42.304572 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29) 2025-07-12 13:41:42.304582 | orchestrator | 2025-07-12 13:41:42.304593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 13:41:42.304603 | orchestrator | Saturday 12 July 2025 13:41:41 +0000 (0:00:00.417) 0:00:53.256 ********* 2025-07-12 13:41:42.304614 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 13:41:42.304625 | orchestrator | 2025-07-12 13:41:42.304636 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:42.304646 | orchestrator | Saturday 12 July 2025 13:41:41 +0000 (0:00:00.336) 0:00:53.593 ********* 2025-07-12 13:41:42.304657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-12 13:41:42.304668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-12 13:41:42.304678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-12 13:41:42.304689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-12 13:41:42.304700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-12 13:41:42.304710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-12 13:41:42.304721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-12 13:41:42.304739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-12 13:41:42.304758 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-12 13:41:42.304774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-12 13:41:42.304790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-07-12 13:41:42.304817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-12 13:41:51.278432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-12 13:41:51.278587 | orchestrator | 2025-07-12 13:41:51.278605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.278618 | orchestrator | Saturday 12 July 2025 13:41:42 +0000 (0:00:00.395) 0:00:53.988 ********* 2025-07-12 13:41:51.278630 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.278642 | orchestrator | 2025-07-12 13:41:51.278654 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.278666 | orchestrator | Saturday 12 July 2025 13:41:42 +0000 (0:00:00.186) 0:00:54.175 ********* 2025-07-12 13:41:51.278677 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.278689 | orchestrator | 2025-07-12 13:41:51.278699 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.278711 | orchestrator | Saturday 12 July 2025 13:41:42 +0000 (0:00:00.220) 0:00:54.396 ********* 2025-07-12 13:41:51.278722 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.278732 | orchestrator | 2025-07-12 13:41:51.278744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.278755 | orchestrator | Saturday 12 July 2025 13:41:43 +0000 (0:00:00.602) 0:00:54.998 ********* 2025-07-12 13:41:51.278766 | orchestrator | 
skipping: [testbed-node-5] 2025-07-12 13:41:51.278777 | orchestrator | 2025-07-12 13:41:51.278788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.278799 | orchestrator | Saturday 12 July 2025 13:41:43 +0000 (0:00:00.216) 0:00:55.215 ********* 2025-07-12 13:41:51.278838 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.278849 | orchestrator | 2025-07-12 13:41:51.278860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.278871 | orchestrator | Saturday 12 July 2025 13:41:43 +0000 (0:00:00.200) 0:00:55.415 ********* 2025-07-12 13:41:51.278882 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.278892 | orchestrator | 2025-07-12 13:41:51.278903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.278914 | orchestrator | Saturday 12 July 2025 13:41:43 +0000 (0:00:00.209) 0:00:55.625 ********* 2025-07-12 13:41:51.278927 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.278938 | orchestrator | 2025-07-12 13:41:51.278950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.278963 | orchestrator | Saturday 12 July 2025 13:41:44 +0000 (0:00:00.211) 0:00:55.837 ********* 2025-07-12 13:41:51.278976 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.278988 | orchestrator | 2025-07-12 13:41:51.279000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.279012 | orchestrator | Saturday 12 July 2025 13:41:44 +0000 (0:00:00.208) 0:00:56.045 ********* 2025-07-12 13:41:51.279024 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-12 13:41:51.279037 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-07-12 13:41:51.279051 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-07-12 
13:41:51.279063 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-07-12 13:41:51.279076 | orchestrator | 2025-07-12 13:41:51.279086 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.279097 | orchestrator | Saturday 12 July 2025 13:41:44 +0000 (0:00:00.625) 0:00:56.671 ********* 2025-07-12 13:41:51.279108 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279119 | orchestrator | 2025-07-12 13:41:51.279129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.279140 | orchestrator | Saturday 12 July 2025 13:41:45 +0000 (0:00:00.202) 0:00:56.873 ********* 2025-07-12 13:41:51.279151 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279162 | orchestrator | 2025-07-12 13:41:51.279172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.279183 | orchestrator | Saturday 12 July 2025 13:41:45 +0000 (0:00:00.199) 0:00:57.072 ********* 2025-07-12 13:41:51.279194 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279204 | orchestrator | 2025-07-12 13:41:51.279215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 13:41:51.279226 | orchestrator | Saturday 12 July 2025 13:41:45 +0000 (0:00:00.203) 0:00:57.275 ********* 2025-07-12 13:41:51.279237 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279248 | orchestrator | 2025-07-12 13:41:51.279258 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-12 13:41:51.279269 | orchestrator | Saturday 12 July 2025 13:41:45 +0000 (0:00:00.197) 0:00:57.472 ********* 2025-07-12 13:41:51.279280 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279291 | orchestrator | 2025-07-12 13:41:51.279301 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-07-12 13:41:51.279312 | orchestrator | Saturday 12 July 2025 13:41:46 +0000 (0:00:00.365) 0:00:57.838 ********* 2025-07-12 13:41:51.279323 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2177925c-0e94-5467-9f04-b37733dbe47a'}}) 2025-07-12 13:41:51.279335 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '10b3d195-009d-5006-b5f6-1b7aa1316d97'}}) 2025-07-12 13:41:51.279345 | orchestrator | 2025-07-12 13:41:51.279356 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-12 13:41:51.279367 | orchestrator | Saturday 12 July 2025 13:41:46 +0000 (0:00:00.194) 0:00:58.032 ********* 2025-07-12 13:41:51.279395 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'}) 2025-07-12 13:41:51.279416 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'}) 2025-07-12 13:41:51.279427 | orchestrator | 2025-07-12 13:41:51.279438 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-12 13:41:51.279488 | orchestrator | Saturday 12 July 2025 13:41:48 +0000 (0:00:01.836) 0:00:59.869 ********* 2025-07-12 13:41:51.279501 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})  2025-07-12 13:41:51.279514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})  2025-07-12 13:41:51.279525 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279535 | orchestrator | 2025-07-12 13:41:51.279547 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-07-12 13:41:51.279558 | orchestrator | Saturday 12 July 2025 13:41:48 +0000 (0:00:00.171) 0:01:00.040 ********* 2025-07-12 13:41:51.279569 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'}) 2025-07-12 13:41:51.279600 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'}) 2025-07-12 13:41:51.279612 | orchestrator | 2025-07-12 13:41:51.279623 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-12 13:41:51.279634 | orchestrator | Saturday 12 July 2025 13:41:49 +0000 (0:00:01.332) 0:01:01.373 ********* 2025-07-12 13:41:51.279645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})  2025-07-12 13:41:51.279656 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})  2025-07-12 13:41:51.279667 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279678 | orchestrator | 2025-07-12 13:41:51.279689 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-12 13:41:51.279700 | orchestrator | Saturday 12 July 2025 13:41:49 +0000 (0:00:00.151) 0:01:01.524 ********* 2025-07-12 13:41:51.279711 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279722 | orchestrator | 2025-07-12 13:41:51.279733 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-12 13:41:51.279744 | orchestrator | Saturday 12 July 2025 13:41:49 +0000 (0:00:00.148) 0:01:01.673 ********* 2025-07-12 13:41:51.279755 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})  2025-07-12 13:41:51.279766 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})  2025-07-12 13:41:51.279778 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279788 | orchestrator | 2025-07-12 13:41:51.279800 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-12 13:41:51.279811 | orchestrator | Saturday 12 July 2025 13:41:50 +0000 (0:00:00.152) 0:01:01.825 ********* 2025-07-12 13:41:51.279821 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279832 | orchestrator | 2025-07-12 13:41:51.279843 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-12 13:41:51.279854 | orchestrator | Saturday 12 July 2025 13:41:50 +0000 (0:00:00.145) 0:01:01.971 ********* 2025-07-12 13:41:51.279865 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})  2025-07-12 13:41:51.279883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})  2025-07-12 13:41:51.279894 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279905 | orchestrator | 2025-07-12 13:41:51.279916 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-12 13:41:51.279927 | orchestrator | Saturday 12 July 2025 13:41:50 +0000 (0:00:00.162) 0:01:02.134 ********* 2025-07-12 13:41:51.279938 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.279949 | orchestrator | 2025-07-12 13:41:51.279960 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-07-12 13:41:51.279971 | orchestrator | Saturday 12 July 2025 13:41:50 +0000 (0:00:00.142) 0:01:02.276 ********* 2025-07-12 13:41:51.279982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})  2025-07-12 13:41:51.279993 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})  2025-07-12 13:41:51.280004 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:51.280015 | orchestrator | 2025-07-12 13:41:51.280031 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-12 13:41:51.280042 | orchestrator | Saturday 12 July 2025 13:41:50 +0000 (0:00:00.151) 0:01:02.427 ********* 2025-07-12 13:41:51.280053 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:41:51.280064 | orchestrator | 2025-07-12 13:41:51.280076 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-12 13:41:51.280087 | orchestrator | Saturday 12 July 2025 13:41:50 +0000 (0:00:00.151) 0:01:02.579 ********* 2025-07-12 13:41:51.280106 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})  2025-07-12 13:41:57.433408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})  2025-07-12 13:41:57.433605 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:41:57.433635 | orchestrator | 2025-07-12 13:41:57.433656 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-12 13:41:57.433679 | orchestrator | Saturday 12 July 2025 
13:41:51 +0000 (0:00:00.396) 0:01:02.975 *********
2025-07-12 13:41:57.433698 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:41:57.433714 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:41:57.433726 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.433737 | orchestrator |
2025-07-12 13:41:57.433748 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-07-12 13:41:57.433759 | orchestrator | Saturday 12 July 2025 13:41:51 +0000 (0:00:00.169) 0:01:03.145 *********
2025-07-12 13:41:57.433770 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:41:57.433781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:41:57.433792 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.433803 | orchestrator |
2025-07-12 13:41:57.433814 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-07-12 13:41:57.433825 | orchestrator | Saturday 12 July 2025 13:41:51 +0000 (0:00:00.157) 0:01:03.302 *********
2025-07-12 13:41:57.433836 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.433847 | orchestrator |
2025-07-12 13:41:57.433858 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-07-12 13:41:57.433897 | orchestrator | Saturday 12 July 2025 13:41:51 +0000 (0:00:00.147) 0:01:03.450 *********
2025-07-12 13:41:57.433908 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.433921 | orchestrator |
2025-07-12 13:41:57.433934 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-07-12 13:41:57.433946 | orchestrator | Saturday 12 July 2025 13:41:51 +0000 (0:00:00.145) 0:01:03.595 *********
2025-07-12 13:41:57.433958 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.433970 | orchestrator |
2025-07-12 13:41:57.433982 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-07-12 13:41:57.433994 | orchestrator | Saturday 12 July 2025 13:41:52 +0000 (0:00:00.146) 0:01:03.754 *********
2025-07-12 13:41:57.434006 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 13:41:57.434083 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-07-12 13:41:57.434097 | orchestrator | }
2025-07-12 13:41:57.434109 | orchestrator |
2025-07-12 13:41:57.434120 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-07-12 13:41:57.434131 | orchestrator | Saturday 12 July 2025 13:41:52 +0000 (0:00:00.146) 0:01:03.900 *********
2025-07-12 13:41:57.434142 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 13:41:57.434153 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-07-12 13:41:57.434163 | orchestrator | }
2025-07-12 13:41:57.434174 | orchestrator |
2025-07-12 13:41:57.434185 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-07-12 13:41:57.434196 | orchestrator | Saturday 12 July 2025 13:41:52 +0000 (0:00:00.147) 0:01:04.048 *********
2025-07-12 13:41:57.434207 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 13:41:57.434218 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-07-12 13:41:57.434229 | orchestrator | }
2025-07-12 13:41:57.434240 | orchestrator |
2025-07-12 13:41:57.434251 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-07-12 13:41:57.434262 | orchestrator | Saturday 12 July 2025 13:41:52 +0000 (0:00:00.140) 0:01:04.188 *********
2025-07-12 13:41:57.434273 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:41:57.434284 | orchestrator |
2025-07-12 13:41:57.434295 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-07-12 13:41:57.434307 | orchestrator | Saturday 12 July 2025 13:41:53 +0000 (0:00:00.522) 0:01:04.710 *********
2025-07-12 13:41:57.434318 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:41:57.434329 | orchestrator |
2025-07-12 13:41:57.434340 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-07-12 13:41:57.434351 | orchestrator | Saturday 12 July 2025 13:41:53 +0000 (0:00:00.521) 0:01:05.232 *********
2025-07-12 13:41:57.434362 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:41:57.434372 | orchestrator |
2025-07-12 13:41:57.434383 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-07-12 13:41:57.434394 | orchestrator | Saturday 12 July 2025 13:41:54 +0000 (0:00:00.496) 0:01:05.728 *********
2025-07-12 13:41:57.434405 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:41:57.434416 | orchestrator |
2025-07-12 13:41:57.434427 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-12 13:41:57.434437 | orchestrator | Saturday 12 July 2025 13:41:54 +0000 (0:00:00.363) 0:01:06.092 *********
2025-07-12 13:41:57.434484 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.434504 | orchestrator |
2025-07-12 13:41:57.434544 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-12 13:41:57.434565 | orchestrator | Saturday 12 July 2025 13:41:54 +0000 (0:00:00.113) 0:01:06.206 *********
2025-07-12 13:41:57.434583 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.434602 | orchestrator |
2025-07-12 13:41:57.434620 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-07-12 13:41:57.434641 | orchestrator | Saturday 12 July 2025 13:41:54 +0000 (0:00:00.114) 0:01:06.321 *********
2025-07-12 13:41:57.434660 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 13:41:57.434695 | orchestrator |     "vgs_report": {
2025-07-12 13:41:57.434716 | orchestrator |         "vg": []
2025-07-12 13:41:57.434763 | orchestrator |     }
2025-07-12 13:41:57.434786 | orchestrator | }
2025-07-12 13:41:57.434805 | orchestrator |
2025-07-12 13:41:57.434826 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-07-12 13:41:57.434838 | orchestrator | Saturday 12 July 2025 13:41:54 +0000 (0:00:00.147) 0:01:06.468 *********
2025-07-12 13:41:57.434849 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.434860 | orchestrator |
2025-07-12 13:41:57.434872 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-12 13:41:57.434883 | orchestrator | Saturday 12 July 2025 13:41:54 +0000 (0:00:00.134) 0:01:06.603 *********
2025-07-12 13:41:57.434894 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.434905 | orchestrator |
2025-07-12 13:41:57.434915 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-12 13:41:57.434926 | orchestrator | Saturday 12 July 2025 13:41:55 +0000 (0:00:00.133) 0:01:06.736 *********
2025-07-12 13:41:57.434937 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.434948 | orchestrator |
2025-07-12 13:41:57.434959 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-12 13:41:57.434970 | orchestrator | Saturday 12 July 2025 13:41:55 +0000 (0:00:00.150) 0:01:06.887 *********
2025-07-12 13:41:57.434981 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.434992 | orchestrator |
2025-07-12 13:41:57.435003 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-12 13:41:57.435014 | orchestrator | Saturday 12 July 2025 13:41:55 +0000 (0:00:00.131) 0:01:07.018 *********
2025-07-12 13:41:57.435025 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435036 | orchestrator |
2025-07-12 13:41:57.435047 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-12 13:41:57.435057 | orchestrator | Saturday 12 July 2025 13:41:55 +0000 (0:00:00.140) 0:01:07.159 *********
2025-07-12 13:41:57.435068 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435079 | orchestrator |
2025-07-12 13:41:57.435090 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-12 13:41:57.435101 | orchestrator | Saturday 12 July 2025 13:41:55 +0000 (0:00:00.126) 0:01:07.285 *********
2025-07-12 13:41:57.435112 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435123 | orchestrator |
2025-07-12 13:41:57.435134 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-12 13:41:57.435145 | orchestrator | Saturday 12 July 2025 13:41:55 +0000 (0:00:00.137) 0:01:07.423 *********
2025-07-12 13:41:57.435156 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435167 | orchestrator |
2025-07-12 13:41:57.435178 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-12 13:41:57.435189 | orchestrator | Saturday 12 July 2025 13:41:55 +0000 (0:00:00.158) 0:01:07.581 *********
2025-07-12 13:41:57.435200 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435210 | orchestrator |
2025-07-12 13:41:57.435221 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-12 13:41:57.435232 | orchestrator | Saturday 12 July 2025 13:41:56 +0000 (0:00:00.332) 0:01:07.913 *********
2025-07-12 13:41:57.435243 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435254 | orchestrator |
2025-07-12 13:41:57.435265 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-07-12 13:41:57.435276 | orchestrator | Saturday 12 July 2025 13:41:56 +0000 (0:00:00.151) 0:01:08.065 *********
2025-07-12 13:41:57.435287 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435298 | orchestrator |
2025-07-12 13:41:57.435309 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-07-12 13:41:57.435320 | orchestrator | Saturday 12 July 2025 13:41:56 +0000 (0:00:00.139) 0:01:08.204 *********
2025-07-12 13:41:57.435331 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435341 | orchestrator |
2025-07-12 13:41:57.435352 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-07-12 13:41:57.435372 | orchestrator | Saturday 12 July 2025 13:41:56 +0000 (0:00:00.154) 0:01:08.358 *********
2025-07-12 13:41:57.435383 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435394 | orchestrator |
2025-07-12 13:41:57.435405 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-12 13:41:57.435416 | orchestrator | Saturday 12 July 2025 13:41:56 +0000 (0:00:00.152) 0:01:08.511 *********
2025-07-12 13:41:57.435426 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435437 | orchestrator |
2025-07-12 13:41:57.435468 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-12 13:41:57.435480 | orchestrator | Saturday 12 July 2025 13:41:56 +0000 (0:00:00.149) 0:01:08.661 *********
2025-07-12 13:41:57.435491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:41:57.435502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:41:57.435514 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435525 | orchestrator |
2025-07-12 13:41:57.435536 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-12 13:41:57.435547 | orchestrator | Saturday 12 July 2025 13:41:57 +0000 (0:00:00.151) 0:01:08.812 *********
2025-07-12 13:41:57.435565 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:41:57.435576 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:41:57.435588 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:41:57.435599 | orchestrator |
2025-07-12 13:41:57.435610 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-12 13:41:57.435621 | orchestrator | Saturday 12 July 2025 13:41:57 +0000 (0:00:00.154) 0:01:08.967 *********
2025-07-12 13:41:57.435640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:42:00.441176 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:42:00.441292 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:00.441309 | orchestrator |
2025-07-12 13:42:00.441322 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-12 13:42:00.441335 | orchestrator | Saturday 12 July 2025 13:41:57 +0000 (0:00:00.161) 0:01:09.128 *********
2025-07-12 13:42:00.441347 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:42:00.441358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:42:00.441369 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:00.441380 | orchestrator |
2025-07-12 13:42:00.441391 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-07-12 13:42:00.441402 | orchestrator | Saturday 12 July 2025 13:41:57 +0000 (0:00:00.146) 0:01:09.274 *********
2025-07-12 13:42:00.441413 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:42:00.441424 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:42:00.441478 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:00.441491 | orchestrator |
2025-07-12 13:42:00.441527 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-12 13:42:00.441540 | orchestrator | Saturday 12 July 2025 13:41:57 +0000 (0:00:00.150) 0:01:09.424 *********
2025-07-12 13:42:00.441551 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:42:00.441563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:42:00.441574 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:00.441585 | orchestrator |
2025-07-12 13:42:00.441596 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-12 13:42:00.441607 | orchestrator | Saturday 12 July 2025 13:41:57 +0000 (0:00:00.153) 0:01:09.578 *********
2025-07-12 13:42:00.441618 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:42:00.441629 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:42:00.441640 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:00.441651 | orchestrator |
2025-07-12 13:42:00.441662 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-07-12 13:42:00.441673 | orchestrator | Saturday 12 July 2025 13:41:58 +0000 (0:00:00.357) 0:01:09.935 *********
2025-07-12 13:42:00.441683 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:42:00.441694 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:42:00.441705 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:00.441716 | orchestrator |
2025-07-12 13:42:00.441727 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-07-12 13:42:00.441738 | orchestrator | Saturday 12 July 2025 13:41:58 +0000 (0:00:00.165) 0:01:10.101 *********
2025-07-12 13:42:00.441749 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:42:00.441760 | orchestrator |
2025-07-12 13:42:00.441771 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-07-12 13:42:00.441782 | orchestrator | Saturday 12 July 2025 13:41:58 +0000 (0:00:00.519) 0:01:10.620 *********
2025-07-12 13:42:00.441793 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:42:00.441804 | orchestrator |
2025-07-12 13:42:00.441815 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-07-12 13:42:00.441826 | orchestrator | Saturday 12 July 2025 13:41:59 +0000 (0:00:00.549) 0:01:11.170 *********
2025-07-12 13:42:00.441837 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:42:00.441847 | orchestrator |
2025-07-12 13:42:00.441859 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-07-12 13:42:00.441869 | orchestrator | Saturday 12 July 2025 13:41:59 +0000 (0:00:00.147) 0:01:11.318 *********
2025-07-12 13:42:00.441880 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'vg_name': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:42:00.441898 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'vg_name': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:42:00.441913 | orchestrator |
2025-07-12 13:42:00.441924 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-07-12 13:42:00.441935 | orchestrator | Saturday 12 July 2025 13:41:59 +0000 (0:00:00.171) 0:01:11.489 *********
2025-07-12 13:42:00.441965 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:42:00.441977 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:42:00.441996 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:00.442007 | orchestrator |
2025-07-12 13:42:00.442069 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-12 13:42:00.442083 | orchestrator | Saturday 12 July 2025 13:41:59 +0000 (0:00:00.155) 0:01:11.645 *********
2025-07-12 13:42:00.442094 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:42:00.442106 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:42:00.442149 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:00.442162 | orchestrator |
2025-07-12 13:42:00.442173 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-12 13:42:00.442184 | orchestrator | Saturday 12 July 2025 13:42:00 +0000 (0:00:00.165) 0:01:11.810 *********
2025-07-12 13:42:00.442195 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'})
2025-07-12 13:42:00.442206 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'})
2025-07-12 13:42:00.442217 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:00.442227 | orchestrator |
2025-07-12 13:42:00.442238 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-12 13:42:00.442267 | orchestrator | Saturday 12 July 2025 13:42:00 +0000 (0:00:00.166) 0:01:11.977 *********
2025-07-12 13:42:00.442278 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 13:42:00.442290 | orchestrator |     "lvm_report": {
2025-07-12 13:42:00.442301 | orchestrator |         "lv": [
2025-07-12 13:42:00.442312 | orchestrator |             {
2025-07-12 13:42:00.442323 | orchestrator |                 "lv_name": "osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97",
2025-07-12 13:42:00.442335 | orchestrator |                 "vg_name": "ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97"
2025-07-12 13:42:00.442345 | orchestrator |             },
2025-07-12 13:42:00.442356 | orchestrator |             {
2025-07-12 13:42:00.442367 | orchestrator |                 "lv_name": "osd-block-2177925c-0e94-5467-9f04-b37733dbe47a",
2025-07-12 13:42:00.442378 | orchestrator |                 "vg_name": "ceph-2177925c-0e94-5467-9f04-b37733dbe47a"
2025-07-12 13:42:00.442389 | orchestrator |             }
2025-07-12 13:42:00.442400 | orchestrator |         ],
2025-07-12 13:42:00.442411 | orchestrator |         "pv": [
2025-07-12 13:42:00.442422 | orchestrator |             {
2025-07-12 13:42:00.442433 | orchestrator |                 "pv_name": "/dev/sdb",
2025-07-12 13:42:00.442463 | orchestrator |                 "vg_name": "ceph-2177925c-0e94-5467-9f04-b37733dbe47a"
2025-07-12 13:42:00.442474 | orchestrator |             },
2025-07-12 13:42:00.442485 | orchestrator |             {
2025-07-12 13:42:00.442496 | orchestrator |                 "pv_name": "/dev/sdc",
2025-07-12 13:42:00.442507 | orchestrator |                 "vg_name": "ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97"
2025-07-12 13:42:00.442518 | orchestrator |             }
2025-07-12 13:42:00.442529 | orchestrator |         ]
2025-07-12 13:42:00.442540 | orchestrator |     }
2025-07-12 13:42:00.442551 | orchestrator | }
2025-07-12 13:42:00.442562 | orchestrator |
2025-07-12 13:42:00.442573 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:42:00.442585 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-12 13:42:00.442596 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-12 13:42:00.442616 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-12 13:42:00.442627 | orchestrator |
2025-07-12 13:42:00.442638 | orchestrator |
2025-07-12 13:42:00.442649 | orchestrator |
2025-07-12 13:42:00.442660 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:42:00.442671 | orchestrator | Saturday 12 July 2025 13:42:00 +0000 (0:00:00.133) 0:01:12.111 *********
2025-07-12 13:42:00.442682 | orchestrator | ===============================================================================
2025-07-12 13:42:00.442693 | orchestrator | Create block VGs -------------------------------------------------------- 5.71s
2025-07-12 13:42:00.442709 | orchestrator | Create block LVs -------------------------------------------------------- 4.07s
2025-07-12 13:42:00.442721 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.94s
2025-07-12 13:42:00.442732 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.63s
2025-07-12 13:42:00.442743 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.56s
2025-07-12 13:42:00.442754 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s
2025-07-12 13:42:00.442765 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s
2025-07-12 13:42:00.442776 | orchestrator | Add known partitions to the list of available block devices ------------- 1.44s
2025-07-12 13:42:00.442795 | orchestrator | Add known links to the list of available block devices ------------------ 1.24s
2025-07-12 13:42:00.791270 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s
2025-07-12 13:42:00.791383 | orchestrator | Print LVM report data --------------------------------------------------- 0.91s
2025-07-12 13:42:00.791391 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2025-07-12 13:42:00.791396 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s
2025-07-12 13:42:00.791401 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.70s
2025-07-12 13:42:00.791406 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2025-07-12 13:42:00.791411 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.68s
2025-07-12 13:42:00.791416 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.68s
2025-07-12 13:42:00.791487 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.67s
2025-07-12 13:42:00.791495 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.67s
2025-07-12 13:42:00.791500 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-07-12 13:42:12.920676 | orchestrator | 2025-07-12 13:42:12 | INFO  | Task d90462dc-a7f2-421f-b2e9-ddccc5be3612 (facts) was prepared for execution.
2025-07-12 13:42:12.920774 | orchestrator | 2025-07-12 13:42:12 | INFO  | It takes a moment until task d90462dc-a7f2-421f-b2e9-ddccc5be3612 (facts) has been started and output is visible here.
2025-07-12 13:42:25.103892 | orchestrator |
2025-07-12 13:42:25.104003 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-12 13:42:25.104018 | orchestrator |
2025-07-12 13:42:25.104029 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 13:42:25.104040 | orchestrator | Saturday 12 July 2025 13:42:16 +0000 (0:00:00.275) 0:00:00.275 *********
2025-07-12 13:42:25.104050 | orchestrator | ok: [testbed-manager]
2025-07-12 13:42:25.104062 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:42:25.104072 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:42:25.104082 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:42:25.104092 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:42:25.104101 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:42:25.104111 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:42:25.104121 | orchestrator |
2025-07-12 13:42:25.104131 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 13:42:25.104169 | orchestrator | Saturday 12 July 2025 13:42:18 +0000 (0:00:01.110) 0:00:01.385 *********
2025-07-12 13:42:25.104179 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:42:25.104190 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:42:25.104200 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:42:25.104209 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:42:25.104219 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:42:25.104229 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:42:25.104238 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:25.104248 | orchestrator |
2025-07-12 13:42:25.104258 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 13:42:25.104267 | orchestrator |
2025-07-12 13:42:25.104277 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 13:42:25.104287 | orchestrator | Saturday 12 July 2025 13:42:19 +0000 (0:00:01.244) 0:00:02.630 *********
2025-07-12 13:42:25.104297 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:42:25.104307 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:42:25.104316 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:42:25.104326 | orchestrator | ok: [testbed-manager]
2025-07-12 13:42:25.104336 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:42:25.104357 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:42:25.104367 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:42:25.104377 | orchestrator |
2025-07-12 13:42:25.104408 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 13:42:25.104420 | orchestrator |
2025-07-12 13:42:25.104430 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 13:42:25.104441 | orchestrator | Saturday 12 July 2025 13:42:24 +0000 (0:00:04.849) 0:00:07.480 *********
2025-07-12 13:42:25.104452 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:42:25.104463 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:42:25.104473 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:42:25.104484 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:42:25.104495 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:42:25.104505 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:42:25.104516 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:42:25.104527 | orchestrator |
2025-07-12 13:42:25.104537 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:42:25.104566 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:25.104579 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:25.104605 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:25.104616 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:25.104626 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:25.104637 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:25.104647 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:42:25.104658 | orchestrator |
2025-07-12 13:42:25.104668 | orchestrator |
2025-07-12 13:42:25.104679 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:42:25.104690 | orchestrator | Saturday 12 July 2025 13:42:24 +0000 (0:00:00.521) 0:00:08.001 *********
2025-07-12 13:42:25.104702 | orchestrator | ===============================================================================
2025-07-12 13:42:25.104723 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s
2025-07-12 13:42:25.104734 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s
2025-07-12 13:42:25.104745 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s
2025-07-12 13:42:25.104756 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-07-12 13:42:25.320600 | orchestrator |
2025-07-12 13:42:25.322520 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jul 12 13:42:25 UTC 2025
2025-07-12 13:42:25.322544 | orchestrator |
2025-07-12 13:42:26.923128 | orchestrator | 2025-07-12 13:42:26 | INFO  | Collection nutshell is prepared for execution
2025-07-12 13:42:26.923234 | orchestrator | 2025-07-12 13:42:26 | INFO  | D [0] - dotfiles
2025-07-12 13:42:36.937450 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [0] - homer
2025-07-12 13:42:36.937577 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [0] - netdata
2025-07-12 13:42:36.937593 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [0] - openstackclient
2025-07-12 13:42:36.937604 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [0] - phpmyadmin
2025-07-12 13:42:36.937655 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [0] - common
2025-07-12 13:42:36.942175 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [1] -- loadbalancer
2025-07-12 13:42:36.942234 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [2] --- opensearch
2025-07-12 13:42:36.942743 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [2] --- mariadb-ng
2025-07-12 13:42:36.943038 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [3] ---- horizon
2025-07-12 13:42:36.943119 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [3] ---- keystone
2025-07-12 13:42:36.943417 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [4] ----- neutron
2025-07-12 13:42:36.943743 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [5] ------ wait-for-nova
2025-07-12 13:42:36.944014 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [5] ------ octavia
2025-07-12 13:42:36.945458 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [4] ----- barbican
2025-07-12 13:42:36.946412 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [4] ----- designate
2025-07-12 13:42:36.946441 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [4] ----- ironic
2025-07-12 13:42:36.946453 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [4] ----- placement
2025-07-12 13:42:36.946464 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [4] ----- magnum
2025-07-12 13:42:36.946840 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [1] -- openvswitch
2025-07-12 13:42:36.947083 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [2] --- ovn
2025-07-12 13:42:36.947484 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [1] -- memcached
2025-07-12 13:42:36.947962 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [1] -- redis
2025-07-12 13:42:36.948030 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [1] -- rabbitmq-ng
2025-07-12 13:42:36.948055 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [0] - kubernetes
2025-07-12 13:42:36.950640 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [1] -- kubeconfig
2025-07-12 13:42:36.950685 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [1] -- copy-kubeconfig
2025-07-12 13:42:36.950829 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [0] - ceph
2025-07-12 13:42:36.953032 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [1] -- ceph-pools
2025-07-12 13:42:36.953068 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [2] --- copy-ceph-keys
2025-07-12 13:42:36.953082 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [3] ---- cephclient
2025-07-12 13:42:36.953849 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-07-12 13:42:36.953899 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [4] ----- wait-for-keystone
2025-07-12 13:42:36.954055 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [5] ------ kolla-ceph-rgw
2025-07-12 13:42:36.954077 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [5] ------ glance
2025-07-12 13:42:36.954088 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [5] ------ cinder
2025-07-12 13:42:36.954177 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [5] ------ nova
2025-07-12 13:42:36.954199 | orchestrator | 2025-07-12 13:42:36 | INFO  | A [4] ----- prometheus
2025-07-12 13:42:36.954211 | orchestrator | 2025-07-12 13:42:36 | INFO  | D [5] ------ grafana
2025-07-12 13:42:37.188129 | orchestrator | 2025-07-12 13:42:37 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-07-12 13:42:37.188224 | orchestrator | 2025-07-12 13:42:37 | INFO  | Tasks are running in the background
2025-07-12 13:42:40.090861 | orchestrator | 2025-07-12 13:42:40 | INFO  | No task IDs specified, wait for all currently running tasks
2025-07-12 13:42:42.229010 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task e36eb811-ad30-41c6-80d9-39a3d2e9f1b2 is in state STARTED
2025-07-12 13:42:42.229100 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED
2025-07-12 13:42:42.229674 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:42:42.230160 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:42:42.230664 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED
2025-07-12 13:42:42.231410 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED
2025-07-12 13:42:42.231819 | orchestrator | 2025-07-12 13:42:42 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED
2025-07-12 13:42:42.231930 | orchestrator | 2025-07-12 13:42:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:45.273791 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task e36eb811-ad30-41c6-80d9-39a3d2e9f1b2 is in state STARTED
2025-07-12 13:42:45.273875 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED
2025-07-12 13:42:45.273894 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:42:45.274554 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:42:45.278245 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED
2025-07-12 13:42:45.278889 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED
2025-07-12 13:42:45.280854 | orchestrator | 2025-07-12 13:42:45 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED
2025-07-12 13:42:45.280915 | orchestrator | 2025-07-12 13:42:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:48.337692 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task e36eb811-ad30-41c6-80d9-39a3d2e9f1b2 is in state STARTED
2025-07-12 13:42:48.337881 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED
2025-07-12 13:42:48.342484 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:42:48.342838 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:42:48.343518 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED
2025-07-12 13:42:48.343921 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED
2025-07-12 13:42:48.344542 | orchestrator | 2025-07-12 13:42:48 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED
2025-07-12 13:42:48.344567 | orchestrator | 2025-07-12 13:42:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:42:51.387530 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task e36eb811-ad30-41c6-80d9-39a3d2e9f1b2 is in state STARTED
2025-07-12 13:42:51.390437 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED
2025-07-12 13:42:51.395869 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:42:51.399520 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:42:51.399990 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED
2025-07-12 13:42:51.401658 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task
7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:42:51.402195 | orchestrator | 2025-07-12 13:42:51 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:42:51.402221 | orchestrator | 2025-07-12 13:42:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:42:54.454926 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task e36eb811-ad30-41c6-80d9-39a3d2e9f1b2 is in state STARTED 2025-07-12 13:42:54.455016 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:42:54.456216 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:42:54.459688 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:42:54.459713 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:42:54.459725 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:42:54.460287 | orchestrator | 2025-07-12 13:42:54 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:42:54.460307 | orchestrator | 2025-07-12 13:42:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:42:57.534409 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task e36eb811-ad30-41c6-80d9-39a3d2e9f1b2 is in state STARTED 2025-07-12 13:42:57.535205 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:42:57.536747 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:42:57.536833 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:42:57.537264 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task 
a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:42:57.538181 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:42:57.542709 | orchestrator | 2025-07-12 13:42:57 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:42:57.542758 | orchestrator | 2025-07-12 13:42:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:00.594932 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task e36eb811-ad30-41c6-80d9-39a3d2e9f1b2 is in state STARTED 2025-07-12 13:43:00.601073 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:43:00.604848 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:43:00.611612 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:43:00.614117 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:43:00.619911 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:43:00.625524 | orchestrator | 2025-07-12 13:43:00 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:43:00.625571 | orchestrator | 2025-07-12 13:43:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:03.665523 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task e36eb811-ad30-41c6-80d9-39a3d2e9f1b2 is in state STARTED 2025-07-12 13:43:03.672360 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:43:03.672396 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:43:03.672408 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:43:03.677654 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:43:03.677681 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:43:03.678108 | orchestrator | 2025-07-12 13:43:03 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:43:03.678131 | orchestrator | 2025-07-12 13:43:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:06.722731 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task e36eb811-ad30-41c6-80d9-39a3d2e9f1b2 is in state SUCCESS 2025-07-12 13:43:06.724185 | orchestrator | 2025-07-12 13:43:06.724216 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-07-12 13:43:06.724229 | orchestrator | 2025-07-12 13:43:06.724241 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-07-12 13:43:06.724253 | orchestrator | Saturday 12 July 2025 13:42:48 +0000 (0:00:00.842) 0:00:00.842 ********* 2025-07-12 13:43:06.724264 | orchestrator | changed: [testbed-manager] 2025-07-12 13:43:06.724276 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:43:06.724315 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:43:06.724327 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:43:06.724338 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:43:06.724349 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:43:06.724360 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:43:06.724371 | orchestrator | 2025-07-12 13:43:06.724382 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-07-12 13:43:06.724393 | orchestrator | Saturday 12 July 2025 13:42:53 +0000 (0:00:04.923) 0:00:05.769 ********* 2025-07-12 13:43:06.724405 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-12 13:43:06.724416 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-12 13:43:06.724427 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-12 13:43:06.724449 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-12 13:43:06.724461 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-12 13:43:06.724473 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-12 13:43:06.724510 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-12 13:43:06.724522 | orchestrator | 2025-07-12 13:43:06.724533 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-07-12 13:43:06.724544 | orchestrator | Saturday 12 July 2025 13:42:55 +0000 (0:00:01.897) 0:00:07.666 ********* 2025-07-12 13:43:06.724559 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:54.394984', 'end': '2025-07-12 13:42:54.399726', 'delta': '0:00:00.004742', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 13:43:06.724581 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:54.488379', 'end': '2025-07-12 13:42:54.496820', 'delta': '0:00:00.008441', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 13:43:06.724594 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:54.421699', 'end': '2025-07-12 13:42:54.428710', 'delta': '0:00:00.007011', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 13:43:06.724687 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:54.628767', 'end': '2025-07-12 13:42:54.637120', 'delta': '0:00:00.008353', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 13:43:06.724702 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:54.873705', 'end': '2025-07-12 13:42:54.878832', 'delta': '0:00:00.005127', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 13:43:06.724722 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:55.025566', 'end': '2025-07-12 13:42:55.031670', 'delta': '0:00:00.006104', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 13:43:06.724734 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 13:42:55.436429', 'end': '2025-07-12 13:42:55.446728', 'delta': '0:00:00.010299', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 13:43:06.724746 | orchestrator | 2025-07-12 13:43:06.724757 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-07-12 13:43:06.724769 | orchestrator | Saturday 12 July 2025 13:42:59 +0000 (0:00:03.284) 0:00:10.950 ********* 2025-07-12 13:43:06.724780 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-12 13:43:06.724791 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-12 13:43:06.724802 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-12 13:43:06.724813 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-12 13:43:06.724824 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-12 13:43:06.724835 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-12 13:43:06.724846 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-12 13:43:06.724857 | orchestrator | 2025-07-12 13:43:06.724868 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-07-12 13:43:06.724880 | orchestrator | Saturday 12 July 2025 13:43:00 +0000 (0:00:01.447) 0:00:12.398 ********* 2025-07-12 13:43:06.724891 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-07-12 13:43:06.724902 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-07-12 13:43:06.724913 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-07-12 13:43:06.724923 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-07-12 13:43:06.724934 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-07-12 13:43:06.724950 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-07-12 13:43:06.724962 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-07-12 13:43:06.724973 | orchestrator | 2025-07-12 13:43:06.724984 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:43:06.725016 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:06.725030 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:06.725042 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:06.725053 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:06.725064 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:06.725075 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:06.725086 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:43:06.725097 | orchestrator | 2025-07-12 13:43:06.725108 | orchestrator | 2025-07-12 13:43:06.725119 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:43:06.725130 | orchestrator | Saturday 12 July 2025 13:43:04 +0000 (0:00:04.041) 0:00:16.439 ********* 2025-07-12 13:43:06.725141 | orchestrator | =============================================================================== 2025-07-12 13:43:06.725152 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.93s 2025-07-12 13:43:06.725163 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.04s 2025-07-12 13:43:06.725174 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.28s 2025-07-12 13:43:06.725185 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.90s 2025-07-12 13:43:06.725196 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. 
---- 1.45s 2025-07-12 13:43:06.725633 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:43:06.727969 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:43:06.731238 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:43:06.734175 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:43:06.734217 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:43:06.737558 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:43:06.737591 | orchestrator | 2025-07-12 13:43:06 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:43:06.737603 | orchestrator | 2025-07-12 13:43:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:09.783086 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:43:09.784463 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:43:09.786557 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:43:09.789018 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:43:09.789477 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:43:09.790415 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:43:09.791072 | orchestrator | 2025-07-12 13:43:09 | INFO  | Task 
226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:43:09.791102 | orchestrator | 2025-07-12 13:43:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:12.849504 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:43:12.851143 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:43:12.856869 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:43:12.859828 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:43:12.863424 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:43:12.865114 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:43:12.865637 | orchestrator | 2025-07-12 13:43:12 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:43:12.865663 | orchestrator | 2025-07-12 13:43:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:15.906488 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:43:15.907085 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:43:15.907969 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:43:15.911156 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:43:15.911180 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:43:15.911651 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task 
5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:43:15.912082 | orchestrator | 2025-07-12 13:43:15 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:43:15.912188 | orchestrator | 2025-07-12 13:43:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:18.966797 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:43:18.966886 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:43:18.967119 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:43:18.967218 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:43:18.967924 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:43:18.968384 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:43:18.968773 | orchestrator | 2025-07-12 13:43:18 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:43:18.968794 | orchestrator | 2025-07-12 13:43:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:22.023849 | orchestrator | 2025-07-12 13:43:22 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:43:22.023973 | orchestrator | 2025-07-12 13:43:22 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:43:22.024313 | orchestrator | 2025-07-12 13:43:22 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:43:22.024765 | orchestrator | 2025-07-12 13:43:22 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:43:22.025386 | orchestrator | 2025-07-12 13:43:22 | INFO  | Task 
7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state STARTED 2025-07-12 13:43:22.025847 | orchestrator | 2025-07-12 13:43:22 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:43:22.026642 | orchestrator | 2025-07-12 13:43:22 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:43:22.026726 | orchestrator | 2025-07-12 13:43:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:25.090419 | orchestrator | 2025-07-12 13:43:25 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:43:25.097334 | orchestrator | 2025-07-12 13:43:25 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:43:25.099289 | orchestrator | 2025-07-12 13:43:25 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:43:25.106114 | orchestrator | 2025-07-12 13:43:25 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED 2025-07-12 13:43:25.107651 | orchestrator | 2025-07-12 13:43:25 | INFO  | Task 7fa7add1-77aa-4610-a61a-6eea8f78dbf9 is in state SUCCESS 2025-07-12 13:43:25.107691 | orchestrator | 2025-07-12 13:43:25 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED 2025-07-12 13:43:25.108623 | orchestrator | 2025-07-12 13:43:25 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:43:25.108660 | orchestrator | 2025-07-12 13:43:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:43:28.171635 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:43:28.174700 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:43:28.174742 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:43:28.174756 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task 
a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED
2025-07-12 13:43:28.179226 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED
2025-07-12 13:43:28.181053 | orchestrator | 2025-07-12 13:43:28 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED
2025-07-12 13:43:28.181075 | orchestrator | 2025-07-12 13:43:28 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:43:31.226507 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED
2025-07-12 13:43:31.226594 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:43:31.226604 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:43:31.235016 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state STARTED
2025-07-12 13:43:31.235141 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED
2025-07-12 13:43:31.238495 | orchestrator | 2025-07-12 13:43:31 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED
2025-07-12 13:43:31.238565 | orchestrator | 2025-07-12 13:43:31 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:43:34.283211 | orchestrator | 2025-07-12 13:43:34 | INFO  | Task a4ef7000-470d-4f14-a331-10f1a44808e7 is in state SUCCESS
2025-07-12 13:43:49.652895 | orchestrator | 2025-07-12 13:43:49 | INFO  | Task
5e0e2374-f00e-4342-a8de-ce1073e23370 is in state STARTED
2025-07-12 13:43:49.654065 | orchestrator | 2025-07-12 13:43:49 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED
2025-07-12 13:43:49.654107 | orchestrator | 2025-07-12 13:43:49 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:43:52.696446 | orchestrator | 2025-07-12 13:43:52 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED
2025-07-12 13:43:52.699115 | orchestrator | 2025-07-12 13:43:52 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:43:52.703842 | orchestrator | 2025-07-12 13:43:52 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:43:52.704242 | orchestrator | 2025-07-12 13:43:52 | INFO  | Task 5e0e2374-f00e-4342-a8de-ce1073e23370 is in state SUCCESS
2025-07-12 13:43:52.707736 | orchestrator |
2025-07-12 13:43:52.707798 | orchestrator |
2025-07-12 13:43:52.707812 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-07-12 13:43:52.707825 | orchestrator |
2025-07-12 13:43:52.707836 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-07-12 13:43:52.707848 | orchestrator | Saturday 12 July 2025 13:42:49 +0000 (0:00:00.328) 0:00:00.328 *********
2025-07-12 13:43:52.707860 | orchestrator | ok: [testbed-manager] => {
2025-07-12 13:43:52.707873 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-07-12 13:43:52.707885 | orchestrator | }
2025-07-12 13:43:52.707897 | orchestrator |
2025-07-12 13:43:52.707908 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-07-12 13:43:52.707919 | orchestrator | Saturday 12 July 2025 13:42:49 +0000 (0:00:00.479) 0:00:00.807 *********
2025-07-12 13:43:52.707930 | orchestrator | ok: [testbed-manager]
2025-07-12 13:43:52.707942 | orchestrator |
2025-07-12 13:43:52.707954 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-07-12 13:43:52.707965 | orchestrator | Saturday 12 July 2025 13:42:51 +0000 (0:00:01.899) 0:00:02.707 *********
2025-07-12 13:43:52.707976 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-07-12 13:43:52.707987 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-07-12 13:43:52.707998 | orchestrator |
2025-07-12 13:43:52.708009 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-07-12 13:43:52.708020 | orchestrator | Saturday 12 July 2025 13:42:53 +0000 (0:00:01.649) 0:00:04.357 *********
2025-07-12 13:43:52.708031 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.708042 | orchestrator |
2025-07-12 13:43:52.708053 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-07-12 13:43:52.708064 | orchestrator | Saturday 12 July 2025 13:42:55 +0000 (0:00:02.318) 0:00:06.675 *********
2025-07-12 13:43:52.708075 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.708106 | orchestrator |
2025-07-12 13:43:52.708118 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-07-12 13:43:52.708129 | orchestrator | Saturday 12 July 2025 13:42:57 +0000 (0:00:02.033) 0:00:08.708 *********
2025-07-12 13:43:52.708140 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
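The `FAILED - RETRYING: ... (10 retries left).` line is Ansible's bounded retry loop (`retries`/`delay` with an `until` condition): the task is re-run until its condition holds or the attempts run out. A minimal Python sketch of that pattern, assuming illustrative attempt counts and delays (not values taken from the role):

```python
import time


def retry_until(check, retries=10, delay=5.0, sleep=time.sleep):
    """Re-run `check` until it returns a truthy result, in the spirit of an
    Ansible task with `until`, `retries` and `delay`.

    Raises RuntimeError once all attempts are exhausted.
    """
    for attempt in range(1, retries + 1):
        result = check()
        if result:
            return result
        if attempt < retries:
            # Mirrors the "FAILED - RETRYING: ... (N retries left)." console line.
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            sleep(delay)
    raise RuntimeError(f"still failing after {retries} attempts")
```

In the log above the first attempt fails (the compose service is not up yet), one retry later the task reports `ok`, so the play continues instead of failing hard.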
2025-07-12 13:43:52.708151 | orchestrator | ok: [testbed-manager]
2025-07-12 13:43:52.708162 | orchestrator |
2025-07-12 13:43:52.708173 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-07-12 13:43:52.708184 | orchestrator | Saturday 12 July 2025 13:43:21 +0000 (0:00:23.967) 0:00:32.675 *********
2025-07-12 13:43:52.708221 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.708242 | orchestrator |
2025-07-12 13:43:52.708261 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:43:52.708273 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:43:52.708286 | orchestrator |
2025-07-12 13:43:52.708298 | orchestrator |
2025-07-12 13:43:52.708310 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:43:52.708322 | orchestrator | Saturday 12 July 2025 13:43:23 +0000 (0:00:01.690) 0:00:34.365 *********
2025-07-12 13:43:52.708364 | orchestrator | ===============================================================================
2025-07-12 13:43:52.708378 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 23.97s
2025-07-12 13:43:52.708390 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.32s
2025-07-12 13:43:52.708402 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.03s
2025-07-12 13:43:52.708414 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.90s
2025-07-12 13:43:52.708425 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.70s
2025-07-12 13:43:52.708436 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.65s
2025-07-12 13:43:52.708447 | orchestrator | osism.services.homer : Inform
about new parameter homer_url_opensearch_dashboards --- 0.48s
2025-07-12 13:43:52.708457 | orchestrator |
2025-07-12 13:43:52.708468 | orchestrator |
2025-07-12 13:43:52.708479 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-07-12 13:43:52.708489 | orchestrator |
2025-07-12 13:43:52.708500 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-07-12 13:43:52.708511 | orchestrator | Saturday 12 July 2025 13:42:48 +0000 (0:00:00.387) 0:00:00.387 *********
2025-07-12 13:43:52.708522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-07-12 13:43:52.708534 | orchestrator |
2025-07-12 13:43:52.708545 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-07-12 13:43:52.708555 | orchestrator | Saturday 12 July 2025 13:42:48 +0000 (0:00:00.633) 0:00:01.021 *********
2025-07-12 13:43:52.708566 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-07-12 13:43:52.708585 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-07-12 13:43:52.708596 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-07-12 13:43:52.708607 | orchestrator |
2025-07-12 13:43:52.708618 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-07-12 13:43:52.708628 | orchestrator | Saturday 12 July 2025 13:42:51 +0000 (0:00:02.088) 0:00:03.109 *********
2025-07-12 13:43:52.708639 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.708650 | orchestrator |
2025-07-12 13:43:52.708661 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-07-12 13:43:52.708672 | orchestrator | Saturday 12 July 2025 13:42:53 +0000 (0:00:02.104)
0:00:05.213 *********
2025-07-12 13:43:52.708696 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-07-12 13:43:52.708716 | orchestrator | ok: [testbed-manager]
2025-07-12 13:43:52.708727 | orchestrator |
2025-07-12 13:43:52.708738 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-07-12 13:43:52.708749 | orchestrator | Saturday 12 July 2025 13:43:27 +0000 (0:00:34.206) 0:00:39.420 *********
2025-07-12 13:43:52.708760 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.708771 | orchestrator |
2025-07-12 13:43:52.708782 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-07-12 13:43:52.708792 | orchestrator | Saturday 12 July 2025 13:43:28 +0000 (0:00:01.051) 0:00:40.472 *********
2025-07-12 13:43:52.708803 | orchestrator | ok: [testbed-manager]
2025-07-12 13:43:52.708814 | orchestrator |
2025-07-12 13:43:52.708825 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-07-12 13:43:52.708836 | orchestrator | Saturday 12 July 2025 13:43:29 +0000 (0:00:00.921) 0:00:41.393 *********
2025-07-12 13:43:52.708847 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.708864 | orchestrator |
2025-07-12 13:43:52.708881 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-07-12 13:43:52.708892 | orchestrator | Saturday 12 July 2025 13:43:31 +0000 (0:00:01.707) 0:00:43.100 *********
2025-07-12 13:43:52.708903 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.708913 | orchestrator |
2025-07-12 13:43:52.708924 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-07-12 13:43:52.708935 | orchestrator | Saturday 12 July 2025 13:43:31 +0000 (0:00:00.704) 0:00:43.805 *********
2025-07-12 13:43:52.708946 | orchestrator | changed:
[testbed-manager]
2025-07-12 13:43:52.708957 | orchestrator |
2025-07-12 13:43:52.708968 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-07-12 13:43:52.708978 | orchestrator | Saturday 12 July 2025 13:43:32 +0000 (0:00:00.827) 0:00:44.633 *********
2025-07-12 13:43:52.708989 | orchestrator | ok: [testbed-manager]
2025-07-12 13:43:52.709000 | orchestrator |
2025-07-12 13:43:52.709011 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:43:52.709022 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:43:52.709033 | orchestrator |
2025-07-12 13:43:52.709044 | orchestrator |
2025-07-12 13:43:52.709054 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:43:52.709065 | orchestrator | Saturday 12 July 2025 13:43:33 +0000 (0:00:00.473) 0:00:45.106 *********
2025-07-12 13:43:52.709076 | orchestrator | ===============================================================================
2025-07-12 13:43:52.709086 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.21s
2025-07-12 13:43:52.709097 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.10s
2025-07-12 13:43:52.709108 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.09s
2025-07-12 13:43:52.709119 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.71s
2025-07-12 13:43:52.709129 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.05s
2025-07-12 13:43:52.709140 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.92s
2025-07-12 13:43:52.709151 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.83s
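Both the "Wait for an healthy service" handler and the interleaved `Task ... is in state STARTED` / `Wait 1 second(s) until the next check` lines follow the same shape: poll a status source until every item reaches a terminal state, sleeping a fixed interval between checks. A minimal sketch of that polling loop, assuming a hypothetical `get_state` callable (the real watcher lives in the osism tooling; this is only the pattern):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0,
                   sleep=time.sleep, clock=time.monotonic):
    """Poll `get_state(task_id)` until no task reports STARTED any more,
    echoing the 'is in state ...' / 'Wait N second(s)' console lines.

    Returns the final state of every task; raises TimeoutError if any task
    is still STARTED when the deadline passes.
    """
    deadline = clock() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        # Anything no longer STARTED (SUCCESS, FAILURE, ...) is terminal.
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            if clock() > deadline:
                raise TimeoutError(f"tasks still running: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return states
```

With several plays queued in parallel, this is why the console alternates between blocks of Ansible output and long runs of state checks: each check cycle reprints every still-pending task before sleeping again.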
2025-07-12 13:43:52.709162 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.70s
2025-07-12 13:43:52.709172 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.63s
2025-07-12 13:43:52.709183 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.47s
2025-07-12 13:43:52.709263 | orchestrator |
2025-07-12 13:43:52.709278 | orchestrator |
2025-07-12 13:43:52.709289 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 13:43:52.709299 | orchestrator |
2025-07-12 13:43:52.709310 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 13:43:52.709329 | orchestrator | Saturday 12 July 2025 13:42:48 +0000 (0:00:00.750) 0:00:00.750 *********
2025-07-12 13:43:52.709339 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-07-12 13:43:52.709350 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-07-12 13:43:52.709361 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-07-12 13:43:52.709372 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-07-12 13:43:52.709382 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-07-12 13:43:52.709393 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-07-12 13:43:52.709404 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-07-12 13:43:52.709415 | orchestrator |
2025-07-12 13:43:52.709425 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-07-12 13:43:52.709436 | orchestrator |
2025-07-12 13:43:52.709446 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-07-12 13:43:52.709457 | orchestrator | Saturday 12 July 2025 13:42:51 +0000
(0:00:03.302) 0:00:04.053 *********
2025-07-12 13:43:52.709486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:43:52.709506 | orchestrator |
2025-07-12 13:43:52.709518 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-07-12 13:43:52.709528 | orchestrator | Saturday 12 July 2025 13:42:54 +0000 (0:00:02.666) 0:00:06.719 *********
2025-07-12 13:43:52.709539 | orchestrator | ok: [testbed-manager]
2025-07-12 13:43:52.709550 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:43:52.709561 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:43:52.709571 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:43:52.709582 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:43:52.709600 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:43:52.709611 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:43:52.709620 | orchestrator |
2025-07-12 13:43:52.709630 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-07-12 13:43:52.709640 | orchestrator | Saturday 12 July 2025 13:42:56 +0000 (0:00:02.033) 0:00:08.752 *********
2025-07-12 13:43:52.709649 | orchestrator | ok: [testbed-manager]
2025-07-12 13:43:52.709659 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:43:52.709668 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:43:52.709678 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:43:52.709687 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:43:52.709696 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:43:52.709706 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:43:52.709715 | orchestrator |
2025-07-12 13:43:52.709725 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-07-12 13:43:52.709735 |
orchestrator | Saturday 12 July 2025 13:43:00 +0000 (0:00:04.111) 0:00:12.863 *********
2025-07-12 13:43:52.709744 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.709754 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:43:52.709763 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:43:52.709772 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:43:52.709782 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:43:52.709791 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:43:52.709801 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:43:52.709810 | orchestrator |
2025-07-12 13:43:52.709820 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-07-12 13:43:52.709829 | orchestrator | Saturday 12 July 2025 13:43:03 +0000 (0:00:03.273) 0:00:16.137 *********
2025-07-12 13:43:52.709839 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.709848 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:43:52.709858 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:43:52.709867 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:43:52.709877 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:43:52.709892 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:43:52.709901 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:43:52.709911 | orchestrator |
2025-07-12 13:43:52.709921 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-07-12 13:43:52.709930 | orchestrator | Saturday 12 July 2025 13:43:14 +0000 (0:00:10.755) 0:00:26.892 *********
2025-07-12 13:43:52.709940 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:43:52.709949 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:43:52.709958 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:43:52.709968 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:43:52.709977 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:43:52.709987 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:43:52.709996 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.710006 | orchestrator |
2025-07-12 13:43:52.710060 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-07-12 13:43:52.710073 | orchestrator | Saturday 12 July 2025 13:43:30 +0000 (0:00:15.811) 0:00:42.704 *********
2025-07-12 13:43:52.710084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:43:52.710118 | orchestrator |
2025-07-12 13:43:52.710128 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-07-12 13:43:52.710138 | orchestrator | Saturday 12 July 2025 13:43:31 +0000 (0:00:01.478) 0:00:44.182 *********
2025-07-12 13:43:52.710147 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-07-12 13:43:52.710157 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-07-12 13:43:52.710167 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-07-12 13:43:52.710176 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-07-12 13:43:52.710186 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-07-12 13:43:52.710219 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-07-12 13:43:52.710230 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-07-12 13:43:52.710239 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-07-12 13:43:52.710249 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-07-12 13:43:52.710259 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-07-12 13:43:52.710268 | orchestrator | changed: [testbed-node-2] =>
(item=stream.conf)
2025-07-12 13:43:52.710278 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-07-12 13:43:52.710287 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-07-12 13:43:52.710297 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-07-12 13:43:52.710306 | orchestrator |
2025-07-12 13:43:52.710316 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-07-12 13:43:52.710325 | orchestrator | Saturday 12 July 2025 13:43:36 +0000 (0:00:05.047) 0:00:49.229 *********
2025-07-12 13:43:52.710335 | orchestrator | ok: [testbed-manager]
2025-07-12 13:43:52.710345 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:43:52.710354 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:43:52.710364 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:43:52.710373 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:43:52.710383 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:43:52.710392 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:43:52.710402 | orchestrator |
2025-07-12 13:43:52.710416 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-07-12 13:43:52.710427 | orchestrator | Saturday 12 July 2025 13:43:37 +0000 (0:00:01.077) 0:00:50.306 *********
2025-07-12 13:43:52.710436 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.710446 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:43:52.710455 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:43:52.710465 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:43:52.710475 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:43:52.710494 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:43:52.710504 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:43:52.710513 | orchestrator |
2025-07-12 13:43:52.710523 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group]
***************
2025-07-12 13:43:52.710540 | orchestrator | Saturday 12 July 2025 13:43:39 +0000 (0:00:01.559) 0:00:51.866 *********
2025-07-12 13:43:52.710550 | orchestrator | ok: [testbed-manager]
2025-07-12 13:43:52.710560 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:43:52.710570 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:43:52.710579 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:43:52.710589 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:43:52.710599 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:43:52.710608 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:43:52.710618 | orchestrator |
2025-07-12 13:43:52.710627 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-07-12 13:43:52.710637 | orchestrator | Saturday 12 July 2025 13:43:41 +0000 (0:00:01.630) 0:00:53.496 *********
2025-07-12 13:43:52.710647 | orchestrator | ok: [testbed-manager]
2025-07-12 13:43:52.710657 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:43:52.710666 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:43:52.710677 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:43:52.710694 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:43:52.710710 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:43:52.710727 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:43:52.710744 | orchestrator |
2025-07-12 13:43:52.710760 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-07-12 13:43:52.710775 | orchestrator | Saturday 12 July 2025 13:43:43 +0000 (0:00:02.361) 0:00:55.857 *********
2025-07-12 13:43:52.710785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-07-12 13:43:52.710796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2,
testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:43:52.710806 | orchestrator |
2025-07-12 13:43:52.710816 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-07-12 13:43:52.710826 | orchestrator | Saturday 12 July 2025 13:43:45 +0000 (0:00:01.732) 0:00:57.590 *********
2025-07-12 13:43:52.710838 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.710853 | orchestrator |
2025-07-12 13:43:52.710868 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-07-12 13:43:52.710883 | orchestrator | Saturday 12 July 2025 13:43:47 +0000 (0:00:02.359) 0:00:59.949 *********
2025-07-12 13:43:52.710899 | orchestrator | changed: [testbed-manager]
2025-07-12 13:43:52.710914 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:43:52.710930 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:43:52.710947 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:43:52.710963 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:43:52.710980 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:43:52.710996 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:43:52.711011 | orchestrator |
2025-07-12 13:43:52.711024 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:43:52.711034 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:43:52.711043 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:43:52.711053 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:43:52.711063 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:43:52.711073 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0
rescued=0 ignored=0
2025-07-12 13:43:52.711090 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:43:52.711100 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:43:52.711110 | orchestrator |
2025-07-12 13:43:52.711119 | orchestrator |
2025-07-12 13:43:52.711129 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:43:52.711139 | orchestrator | Saturday 12 July 2025 13:43:50 +0000 (0:00:03.192) 0:01:03.142 *********
2025-07-12 13:43:52.711164 | orchestrator | ===============================================================================
2025-07-12 13:43:52.711183 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 15.81s
2025-07-12 13:43:52.711214 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.76s
2025-07-12 13:43:52.711226 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.05s
2025-07-12 13:43:52.711236 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.11s
2025-07-12 13:43:52.711245 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.29s
2025-07-12 13:43:52.711265 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.27s
2025-07-12 13:43:52.711275 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.19s
2025-07-12 13:43:52.711285 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.67s
2025-07-12 13:43:52.711294 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.36s
2025-07-12 13:43:52.711304 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.36s
2025-07-12
13:43:52.711313 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.03s
2025-07-12 13:43:52.711330 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.73s
2025-07-12 13:43:52.711340 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.63s
2025-07-12 13:43:52.711350 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.56s
2025-07-12 13:43:52.711359 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.48s
2025-07-12 13:43:52.711369 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.08s
2025-07-12 13:43:52.711379 | orchestrator | 2025-07-12 13:43:52 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED
2025-07-12 13:43:52.711389 | orchestrator | 2025-07-12 13:43:52 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:43:55.751113 | orchestrator | 2025-07-12 13:43:55 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED
2025-07-12 13:43:55.751237 | orchestrator | 2025-07-12 13:43:55 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:43:55.751251 | orchestrator | 2025-07-12 13:43:55 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:43:55.751330 | orchestrator | 2025-07-12 13:43:55 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED
2025-07-12 13:43:55.751813 | orchestrator | 2025-07-12 13:43:55 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:43:58.799101 | orchestrator | 2025-07-12 13:43:58 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED
2025-07-12 13:43:58.801784 | orchestrator | 2025-07-12 13:43:58 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:43:58.802625 | orchestrator | 2025-07-12 13:43:58 | INFO 
| Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:43:58.803393 | orchestrator | 2025-07-12 13:43:58 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED
2025-07-12 13:43:58.803417 | orchestrator | 2025-07-12 13:43:58 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:44:26.255065 | orchestrator | 2025-07-12 13:44:26 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED
2025-07-12 13:44:26.257745 | orchestrator | 2025-07-12 13:44:26 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:44:26.259399 | orchestrator | 2025-07-12 13:44:26 | INFO  | Task
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:26.261001 | orchestrator | 2025-07-12 13:44:26 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:44:26.261381 | orchestrator | 2025-07-12 13:44:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:29.325545 | orchestrator | 2025-07-12 13:44:29 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:29.326648 | orchestrator | 2025-07-12 13:44:29 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:29.327444 | orchestrator | 2025-07-12 13:44:29 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:29.328362 | orchestrator | 2025-07-12 13:44:29 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:44:29.328396 | orchestrator | 2025-07-12 13:44:29 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:32.386773 | orchestrator | 2025-07-12 13:44:32 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:32.387600 | orchestrator | 2025-07-12 13:44:32 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:32.387632 | orchestrator | 2025-07-12 13:44:32 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:32.388720 | orchestrator | 2025-07-12 13:44:32 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:44:32.388795 | orchestrator | 2025-07-12 13:44:32 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:35.453655 | orchestrator | 2025-07-12 13:44:35 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:35.456513 | orchestrator | 2025-07-12 13:44:35 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:35.457810 | orchestrator | 2025-07-12 13:44:35 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:35.458912 | orchestrator | 2025-07-12 13:44:35 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:44:35.459376 | orchestrator | 2025-07-12 13:44:35 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:38.498749 | orchestrator | 2025-07-12 13:44:38 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:38.500728 | orchestrator | 2025-07-12 13:44:38 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:38.501624 | orchestrator | 2025-07-12 13:44:38 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:38.505052 | orchestrator | 2025-07-12 13:44:38 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:44:38.505081 | orchestrator | 2025-07-12 13:44:38 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:41.550669 | orchestrator | 2025-07-12 13:44:41 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:41.552847 | orchestrator | 2025-07-12 13:44:41 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:41.554444 | orchestrator | 2025-07-12 13:44:41 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:41.556855 | orchestrator | 2025-07-12 13:44:41 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:44:41.556962 | orchestrator | 2025-07-12 13:44:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:44.596325 | orchestrator | 2025-07-12 13:44:44 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:44.596429 | orchestrator | 2025-07-12 13:44:44 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:44.597140 | orchestrator | 2025-07-12 13:44:44 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:44.600842 | orchestrator | 2025-07-12 13:44:44 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state STARTED 2025-07-12 13:44:44.600866 | orchestrator | 2025-07-12 13:44:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:47.654159 | orchestrator | 2025-07-12 13:44:47 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:47.656291 | orchestrator | 2025-07-12 13:44:47 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:47.658265 | orchestrator | 2025-07-12 13:44:47 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:47.659037 | orchestrator | 2025-07-12 13:44:47 | INFO  | Task 226e4f9f-8b2f-4474-b3f7-1dfd5a93a5ea is in state SUCCESS 2025-07-12 13:44:47.659548 | orchestrator | 2025-07-12 13:44:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:50.713453 | orchestrator | 2025-07-12 13:44:50 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:50.716587 | orchestrator | 2025-07-12 13:44:50 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:50.718432 | orchestrator | 2025-07-12 13:44:50 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:50.718460 | orchestrator | 2025-07-12 13:44:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:53.799270 | orchestrator | 2025-07-12 13:44:53 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:53.801555 | orchestrator | 2025-07-12 13:44:53 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:53.801652 | orchestrator | 2025-07-12 13:44:53 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:53.802006 | orchestrator | 2025-07-12 13:44:53 | INFO  | Wait 1 second(s) until the next 
check 2025-07-12 13:44:56.870795 | orchestrator | 2025-07-12 13:44:56 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:56.871683 | orchestrator | 2025-07-12 13:44:56 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:56.872670 | orchestrator | 2025-07-12 13:44:56 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:56.873165 | orchestrator | 2025-07-12 13:44:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:44:59.906915 | orchestrator | 2025-07-12 13:44:59 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:44:59.908808 | orchestrator | 2025-07-12 13:44:59 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:44:59.910130 | orchestrator | 2025-07-12 13:44:59 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:44:59.910537 | orchestrator | 2025-07-12 13:44:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:02.949670 | orchestrator | 2025-07-12 13:45:02 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:02.949775 | orchestrator | 2025-07-12 13:45:02 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:02.950593 | orchestrator | 2025-07-12 13:45:02 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:02.950624 | orchestrator | 2025-07-12 13:45:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:05.980392 | orchestrator | 2025-07-12 13:45:05 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:05.981113 | orchestrator | 2025-07-12 13:45:05 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:05.981714 | orchestrator | 2025-07-12 13:45:05 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 
13:45:05.981738 | orchestrator | 2025-07-12 13:45:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:09.026545 | orchestrator | 2025-07-12 13:45:09 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:09.028639 | orchestrator | 2025-07-12 13:45:09 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:09.031983 | orchestrator | 2025-07-12 13:45:09 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:09.032011 | orchestrator | 2025-07-12 13:45:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:12.079483 | orchestrator | 2025-07-12 13:45:12 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:12.079568 | orchestrator | 2025-07-12 13:45:12 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:12.080897 | orchestrator | 2025-07-12 13:45:12 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:12.080922 | orchestrator | 2025-07-12 13:45:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:15.138935 | orchestrator | 2025-07-12 13:45:15 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:15.143582 | orchestrator | 2025-07-12 13:45:15 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:15.147576 | orchestrator | 2025-07-12 13:45:15 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:15.147630 | orchestrator | 2025-07-12 13:45:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:18.181080 | orchestrator | 2025-07-12 13:45:18 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:18.182623 | orchestrator | 2025-07-12 13:45:18 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:18.186832 | orchestrator | 2025-07-12 13:45:18 | 
INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:18.186866 | orchestrator | 2025-07-12 13:45:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:21.222293 | orchestrator | 2025-07-12 13:45:21 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:21.224014 | orchestrator | 2025-07-12 13:45:21 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:21.225137 | orchestrator | 2025-07-12 13:45:21 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:21.225241 | orchestrator | 2025-07-12 13:45:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:24.267135 | orchestrator | 2025-07-12 13:45:24 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:24.267956 | orchestrator | 2025-07-12 13:45:24 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:24.268608 | orchestrator | 2025-07-12 13:45:24 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:24.268633 | orchestrator | 2025-07-12 13:45:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:27.303934 | orchestrator | 2025-07-12 13:45:27 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:27.305891 | orchestrator | 2025-07-12 13:45:27 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:27.310203 | orchestrator | 2025-07-12 13:45:27 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:27.310255 | orchestrator | 2025-07-12 13:45:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:30.358859 | orchestrator | 2025-07-12 13:45:30 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:30.358953 | orchestrator | 2025-07-12 13:45:30 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in 
state STARTED 2025-07-12 13:45:30.358967 | orchestrator | 2025-07-12 13:45:30 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:30.358979 | orchestrator | 2025-07-12 13:45:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:33.401591 | orchestrator | 2025-07-12 13:45:33 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:33.405439 | orchestrator | 2025-07-12 13:45:33 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:33.405468 | orchestrator | 2025-07-12 13:45:33 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:33.405482 | orchestrator | 2025-07-12 13:45:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:36.451445 | orchestrator | 2025-07-12 13:45:36 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state STARTED 2025-07-12 13:45:36.452880 | orchestrator | 2025-07-12 13:45:36 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED 2025-07-12 13:45:36.455943 | orchestrator | 2025-07-12 13:45:36 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:45:36.455981 | orchestrator | 2025-07-12 13:45:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:45:39.502889 | orchestrator | 2025-07-12 13:45:39 | INFO  | Task d8cd1e88-fc12-404a-af78-d43f4adf0c21 is in state SUCCESS 2025-07-12 13:45:39.504804 | orchestrator | 2025-07-12 13:45:39.504851 | orchestrator | 2025-07-12 13:45:39.504863 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-07-12 13:45:39.504875 | orchestrator | 2025-07-12 13:45:39.504887 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-07-12 13:45:39.504898 | orchestrator | Saturday 12 July 2025 13:43:10 +0000 (0:00:00.189) 0:00:00.189 ********* 2025-07-12 13:45:39.504910 | orchestrator | ok: 
[testbed-manager] 2025-07-12 13:45:39.504923 | orchestrator | 2025-07-12 13:45:39.504935 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-07-12 13:45:39.504946 | orchestrator | Saturday 12 July 2025 13:43:11 +0000 (0:00:00.844) 0:00:01.034 ********* 2025-07-12 13:45:39.504958 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-07-12 13:45:39.504969 | orchestrator | 2025-07-12 13:45:39.504980 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-07-12 13:45:39.504992 | orchestrator | Saturday 12 July 2025 13:43:11 +0000 (0:00:00.628) 0:00:01.662 ********* 2025-07-12 13:45:39.505037 | orchestrator | changed: [testbed-manager] 2025-07-12 13:45:39.505049 | orchestrator | 2025-07-12 13:45:39.505060 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-07-12 13:45:39.505072 | orchestrator | Saturday 12 July 2025 13:43:13 +0000 (0:00:01.352) 0:00:03.015 ********* 2025-07-12 13:45:39.505083 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-07-12 13:45:39.505122 | orchestrator | ok: [testbed-manager] 2025-07-12 13:45:39.505134 | orchestrator | 2025-07-12 13:45:39.505145 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-07-12 13:45:39.505156 | orchestrator | Saturday 12 July 2025 13:44:40 +0000 (0:01:27.862) 0:01:30.878 ********* 2025-07-12 13:45:39.505168 | orchestrator | changed: [testbed-manager] 2025-07-12 13:45:39.505179 | orchestrator | 2025-07-12 13:45:39.505190 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:45:39.505202 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:45:39.505215 | orchestrator | 2025-07-12 13:45:39.505226 | orchestrator | 2025-07-12 13:45:39.505238 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:45:39.505249 | orchestrator | Saturday 12 July 2025 13:44:44 +0000 (0:00:03.638) 0:01:34.516 ********* 2025-07-12 13:45:39.505267 | orchestrator | =============================================================================== 2025-07-12 13:45:39.505279 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 87.86s 2025-07-12 13:45:39.505290 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.64s 2025-07-12 13:45:39.505301 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.35s 2025-07-12 13:45:39.505312 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.84s 2025-07-12 13:45:39.505324 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.63s 2025-07-12 13:45:39.505335 | orchestrator | 2025-07-12 13:45:39.505346 | orchestrator | 2025-07-12 13:45:39.505357 | orchestrator | PLAY [Apply role common] 
*******************************************************
2025-07-12 13:45:39.505369 | orchestrator |
2025-07-12 13:45:39.505381 | orchestrator | TASK [common : include_tasks] **************************************************
2025-07-12 13:45:39.505414 | orchestrator | Saturday 12 July 2025 13:42:41 +0000 (0:00:00.269) 0:00:00.269 *********
2025-07-12 13:45:39.505428 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:45:39.505441 | orchestrator |
2025-07-12 13:45:39.505454 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-07-12 13:45:39.505466 | orchestrator | Saturday 12 July 2025 13:42:43 +0000 (0:00:01.335) 0:00:01.604 *********
2025-07-12 13:45:39.505479 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:39.505491 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:39.505502 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:39.505515 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:39.505527 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:39.505539 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:39.505552 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:39.505563 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:39.505575 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:39.505587 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:39.505600 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:39.505612 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:39.505624 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:39.505637 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:39.505649 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:39.505662 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:39.505686 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 13:45:39.505698 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:39.505711 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:39.505723 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 13:45:39.505735 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 13:45:39.505747 | orchestrator |
2025-07-12 13:45:39.505759 | orchestrator | TASK [common : include_tasks] **************************************************
2025-07-12 13:45:39.505770 | orchestrator | Saturday 12 July 2025 13:42:47 +0000 (0:00:04.544) 0:00:06.149 *********
2025-07-12 13:45:39.505781 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:45:39.505793 | orchestrator |
2025-07-12 13:45:39.505804 |
orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-07-12 13:45:39.505815 | orchestrator | Saturday 12 July 2025 13:42:49 +0000 (0:00:01.459) 0:00:07.609 *********
2025-07-12 13:45:39.505830 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.505861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.505874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.505886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.505897 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.505922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.505934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.505946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.505969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.505981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.505993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506102 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506232 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506244 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506255 | orchestrator |
2025-07-12 13:45:39.506267 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-07-12
13:45:39.506278 | orchestrator | Saturday 12 July 2025 13:42:54 +0000 (0:00:05.337) 0:00:12.946 *********
2025-07-12 13:45:39.506296 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506308 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506335 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506363 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:45:39.506375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506399 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:45:39.506411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506508 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:45:39.506520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506560 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:45:39.506571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506607 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:45:39.506618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506667 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:45:39.506678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes':
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506718 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:45:39.506729 | orchestrator |
2025-07-12 13:45:39.506741 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-07-12 13:45:39.506752 | orchestrator | Saturday 12 July 2025 13:42:56 +0000 (0:00:01.592) 0:00:14.538 *********
2025-07-12 13:45:39.506763 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506775 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506799 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506853 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:45:39.506865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.506970 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:45:39.506982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.506994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507097 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:45:39.507109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507153 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:45:39.507165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507217 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:45:39.507229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507265 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:45:39.507277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507316 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:45:39.507328 | orchestrator |
2025-07-12 13:45:39.507339 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-07-12 13:45:39.507350 | orchestrator | Saturday 12 July 2025 13:42:59 +0000 (0:00:02.807) 0:00:17.346 *********
2025-07-12 13:45:39.507361 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:45:39.507372 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:45:39.507383 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:45:39.507394 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:45:39.507405 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:45:39.507416 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:45:39.507427 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:45:39.507438 | orchestrator |
2025-07-12 13:45:39.507449 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-07-12 13:45:39.507461 | orchestrator | Saturday 12 July 2025 13:42:59 +0000 (0:00:00.898) 0:00:18.244 *********
2025-07-12 13:45:39.507471 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:45:39.507483 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:45:39.507493 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:45:39.507504 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:45:39.507516 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:45:39.507534 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:45:39.507545 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:45:39.507556 | orchestrator |
2025-07-12 13:45:39.507567 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-07-12 13:45:39.507578 | orchestrator | Saturday 12 July 2025 13:43:01 +0000 (0:00:01.375) 0:00:19.620 *********
2025-07-12 13:45:39.507590 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507625 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE':
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.507726 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.507780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.507797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.507809 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.507826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.507875 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.507888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.507900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.507912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.507923 | orchestrator |
2025-07-12 13:45:39.507934 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-07-12 13:45:39.507946 | orchestrator | Saturday 12 July 2025 13:43:06 +0000 (0:00:05.184) 0:00:24.805 *********
2025-07-12 13:45:39.507958 | orchestrator | [WARNING]: Skipped
2025-07-12 13:45:39.507969 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-07-12 13:45:39.507987 | orchestrator | to this access issue:
2025-07-12 13:45:39.508025 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-07-12 13:45:39.508045 | orchestrator | directory
2025-07-12 13:45:39.508065 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 13:45:39.508083 | orchestrator |
2025-07-12 13:45:39.508101 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-07-12 13:45:39.508113 | orchestrator | Saturday 12 July 2025 13:43:08 +0000 (0:00:01.675) 0:00:26.480 *********
2025-07-12 13:45:39.508124 | orchestrator | [WARNING]: Skipped
2025-07-12 13:45:39.508135 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-07-12 13:45:39.508146 | orchestrator | to this access issue:
2025-07-12 13:45:39.508157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-07-12 13:45:39.508168 | orchestrator | directory
2025-07-12 13:45:39.508179 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 13:45:39.508190 | orchestrator |
2025-07-12 13:45:39.508201 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-07-12 13:45:39.508211 | orchestrator | Saturday 12 July 2025 13:43:09 +0000 (0:00:00.973) 0:00:27.454 *********
2025-07-12 13:45:39.508222 | orchestrator | [WARNING]: Skipped
2025-07-12 13:45:39.508233 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-07-12 13:45:39.508244 | orchestrator | to this access issue:
2025-07-12 13:45:39.508255 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-07-12 13:45:39.508266 | orchestrator | directory
2025-07-12 13:45:39.508277 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 13:45:39.508288 | orchestrator |
2025-07-12 13:45:39.508299 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-07-12 13:45:39.508310 | orchestrator | Saturday 12 July 2025 13:43:09 +0000 (0:00:00.614) 0:00:28.147 *********
2025-07-12 13:45:39.508321 | orchestrator | [WARNING]: Skipped
2025-07-12 13:45:39.508332 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-07-12 13:45:39.508343 | orchestrator | to this access issue:
2025-07-12 13:45:39.508353 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-07-12 13:45:39.508364 | orchestrator | directory
2025-07-12 13:45:39.508375 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 13:45:39.508386 | orchestrator |
2025-07-12 13:45:39.508397 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-07-12 13:45:39.508408 | orchestrator | Saturday 12 July 2025 13:43:10 +0000 (0:00:00.614) 0:00:28.761 *********
2025-07-12 13:45:39.508437 | orchestrator | changed: [testbed-manager]
2025-07-12 13:45:39.508448 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:45:39.508459 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:45:39.508470 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:45:39.508481 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:45:39.508492 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:45:39.508502 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:45:39.508513 | orchestrator |
2025-07-12 13:45:39.508524 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-07-12 13:45:39.508545 | orchestrator | Saturday 12 July 2025 13:43:13 +0000 (0:00:03.453) 0:00:32.215 *********
2025-07-12 13:45:39.508557 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:39.508568 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:39.508579 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:39.508597 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:39.508608 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:39.508627 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:39.508638 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 13:45:39.508649 | orchestrator |
2025-07-12 13:45:39.508660 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-07-12 13:45:39.508671 | orchestrator | Saturday 12 July 2025 13:43:16 +0000 (0:00:02.919) 0:00:35.134 *********
2025-07-12 13:45:39.508682 | orchestrator | changed: [testbed-manager]
2025-07-12 13:45:39.508693 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:45:39.508705 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:45:39.508715 | orchestrator | changed: [testbed-node-2]
2025-07-12
13:45:39.508726 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:45:39.508737 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:45:39.508748 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:45:39.508759 | orchestrator | 2025-07-12 13:45:39.508770 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-07-12 13:45:39.508781 | orchestrator | Saturday 12 July 2025 13:43:19 +0000 (0:00:02.755) 0:00:37.890 ********* 2025-07-12 13:45:39.508792 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:39.508810 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:39.508822 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:39.508834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:39.508846 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.508872 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:39.508885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:39.508896 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.508913 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:39.508925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:39.508936 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.508948 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:39.508959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:39.508983 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.508996 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:39.509033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:39.509049 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.509061 | orchestrator | 
ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:39.509919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:45:39.509954 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:45:39.509978 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.509989 | orchestrator |
2025-07-12 13:45:39.510102 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-07-12 13:45:39.510119 | orchestrator | Saturday 12 July 2025 13:43:22 +0000 (0:00:02.651) 0:00:40.541 *********
2025-07-12 13:45:39.510131 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 13:45:39.510143 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 13:45:39.510154 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 13:45:39.510165 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 13:45:39.510176 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 13:45:39.510187 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 13:45:39.510198 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 13:45:39.510207 | orchestrator |
2025-07-12 13:45:39.510217 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-07-12 13:45:39.510227 | orchestrator | Saturday 12 July 2025 13:43:24 +0000 (0:00:02.681) 0:00:43.223 *********
2025-07-12 13:45:39.510237 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 13:45:39.510247 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 13:45:39.510257 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 13:45:39.510266 | orchestrator | changed:
[testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:39.510276 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:39.510286 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:39.510320 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 13:45:39.510331 | orchestrator | 2025-07-12 13:45:39.510341 | orchestrator | TASK [common : Check common containers] **************************************** 2025-07-12 13:45:39.510351 | orchestrator | Saturday 12 July 2025 13:43:28 +0000 (0:00:03.345) 0:00:46.569 ********* 2025-07-12 13:45:39.510367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:39.510379 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 13:45:39.510400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.510418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.510428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.510438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.510449 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 13:45:39.510473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510530 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510616 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510627 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:45:39.510650 | orchestrator |
2025-07-12 13:45:39.510661 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-07-12 13:45:39.510672 | orchestrator | Saturday 12 July 2025 13:43:31 +0000 (0:00:03.502) 0:00:50.072 *********
2025-07-12 13:45:39.510682 | orchestrator | changed: [testbed-manager]
2025-07-12 13:45:39.510692 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:45:39.510702 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:45:39.510712 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:45:39.510722 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:45:39.510732 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:45:39.510741 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:45:39.510751 | orchestrator |
2025-07-12 13:45:39.510761 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-07-12 13:45:39.510771 | orchestrator | Saturday 12 July 2025 13:43:33 +0000 (0:00:01.913) 0:00:51.986 *********
2025-07-12 13:45:39.510780 | orchestrator | changed: [testbed-manager]
2025-07-12 13:45:39.510790 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:45:39.510800 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:45:39.510809 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:45:39.510819 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:45:39.510829 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:45:39.510838 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:45:39.510848 | orchestrator |
2025-07-12 13:45:39.510858 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 13:45:39.510868 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:01.467) 0:00:53.453 *********
2025-07-12 13:45:39.510877 | orchestrator |
2025-07-12 13:45:39.510888 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 13:45:39.510903 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:00.208) 0:00:53.661 *********
2025-07-12 13:45:39.510919 | orchestrator |
2025-07-12 13:45:39.510943 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 13:45:39.510959 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:00.055) 0:00:53.717 *********
2025-07-12 13:45:39.510974 | orchestrator |
2025-07-12 13:45:39.510984 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 13:45:39.510994 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:00.082) 0:00:53.799 *********
2025-07-12 13:45:39.511063 | orchestrator |
2025-07-12 13:45:39.511074 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 13:45:39.511084 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:00.062) 0:00:53.862 *********
2025-07-12 13:45:39.511094 | orchestrator |
2025-07-12 13:45:39.511109 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 13:45:39.511119 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:00.053) 0:00:53.916 *********
2025-07-12 13:45:39.511129 | orchestrator |
2025-07-12 13:45:39.511139 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 13:45:39.511148 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:00.062) 0:00:53.978 *********
2025-07-12 13:45:39.511158 | orchestrator |
2025-07-12 13:45:39.511168 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-07-12 13:45:39.511178 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:00.093) 0:00:54.072 *********
2025-07-12 13:45:39.511188 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:45:39.511198 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:45:39.511208 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:45:39.511217 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:45:39.511227 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:45:39.511237 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:45:39.511247 | orchestrator | changed: [testbed-manager]
2025-07-12 13:45:39.511257 | orchestrator |
2025-07-12 13:45:39.511267 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-07-12 13:45:39.511276 | orchestrator | Saturday 12 July 2025 13:44:17 +0000 (0:00:41.952) 0:01:36.025 *********
2025-07-12 13:45:39.511286 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:45:39.511302 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:45:39.511313 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:45:39.511323 | orchestrator | changed: [testbed-manager]
2025-07-12 13:45:39.511332 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:45:39.511342 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:45:39.511352 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:45:39.511362 | orchestrator |
2025-07-12 13:45:39.511372 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-07-12 13:45:39.511381 | orchestrator | Saturday 12 July 2025 13:45:26 +0000 (0:01:08.853) 0:02:44.878 *********
2025-07-12 13:45:39.511391 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:45:39.511401 | orchestrator | ok: [testbed-manager]
2025-07-12 13:45:39.511411 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:45:39.511421 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:45:39.511431 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:45:39.511441 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:45:39.511450 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:45:39.511460 | orchestrator |
2025-07-12 13:45:39.511469 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-07-12 13:45:39.511477 | orchestrator | Saturday 12 July 2025 13:45:28 +0000 (0:00:02.324) 0:02:47.203 *********
2025-07-12 13:45:39.511485 | orchestrator | changed: [testbed-manager]
2025-07-12 13:45:39.511493 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:45:39.511501 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:45:39.511508 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:45:39.511516 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:45:39.511524 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:45:39.511532 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:45:39.511540 | orchestrator |
2025-07-12 13:45:39.511548 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:45:39.511563 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 13:45:39.511572 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 13:45:39.511580 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 13:45:39.511588 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 13:45:39.511596 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 13:45:39.511604 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 13:45:39.511612 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 13:45:39.511620 | orchestrator |
2025-07-12 13:45:39.511628 | orchestrator |
2025-07-12 13:45:39.511637 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:45:39.511645 | orchestrator | Saturday 12 July 2025 13:45:38 +0000 (0:00:09.654) 0:02:56.857 *********
2025-07-12 13:45:39.511653 | orchestrator | ===============================================================================
2025-07-12 13:45:39.511661 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 68.85s
2025-07-12 13:45:39.511669 | orchestrator | common : Restart fluentd container ------------------------------------- 41.95s
2025-07-12 13:45:39.511677 | orchestrator | common : Restart cron container ----------------------------------------- 9.65s
2025-07-12 13:45:39.511685 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.34s
2025-07-12 13:45:39.511693 | orchestrator | common : Copying over config.json files for services -------------------- 5.19s
2025-07-12 13:45:39.511701 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.54s
2025-07-12 13:45:39.511708 | orchestrator | common : Check common containers ---------------------------------------- 3.50s
2025-07-12 13:45:39.511716 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.45s
2025-07-12 13:45:39.511724 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.35s
2025-07-12 13:45:39.511732 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.92s
2025-07-12 13:45:39.511744 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.81s
2025-07-12 13:45:39.511752 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.76s
2025-07-12 13:45:39.511760 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.68s
2025-07-12 13:45:39.511768 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.65s
2025-07-12 13:45:39.511776 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.32s
2025-07-12 13:45:39.511784 | orchestrator | common : Creating log volume -------------------------------------------- 1.91s
2025-07-12 13:45:39.511792 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.67s
2025-07-12 13:45:39.511800 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.59s
2025-07-12 13:45:39.511808 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.47s
2025-07-12 13:45:39.511816 | orchestrator | common : include_tasks -------------------------------------------------- 1.46s
2025-07-12 13:45:39.511827 | orchestrator | 2025-07-12 13:45:39 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:45:39.511840 | orchestrator | 2025-07-12 13:45:39 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:45:39.511849 | orchestrator | 2025-07-12 13:45:39 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:45:42.563639 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:45:42.564732 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:45:42.565148 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:45:42.567672 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task a3b2878f-825e-4708-8030-29f6b22f0e3b is in state STARTED
2025-07-12 13:45:42.568346 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:45:42.569142 | orchestrator | 2025-07-12 13:45:42 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:45:42.569168 | orchestrator | 2025-07-12 13:45:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:45:45.607550 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:45:45.607668 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:45:45.607960 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:45:45.609842 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task a3b2878f-825e-4708-8030-29f6b22f0e3b is in state STARTED
2025-07-12 13:45:45.610298 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:45:45.611021 | orchestrator | 2025-07-12 13:45:45 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:45:45.611044 | orchestrator | 2025-07-12 13:45:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:45:48.641435 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:45:48.641657 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:45:48.642423 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:45:48.642870 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task a3b2878f-825e-4708-8030-29f6b22f0e3b is in state STARTED
2025-07-12 13:45:48.644119 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:45:48.644707 | orchestrator | 2025-07-12 13:45:48 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:45:48.644729 | orchestrator | 2025-07-12 13:45:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:45:51.673908 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:45:51.675052 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:45:51.678268 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:45:51.680129 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task a3b2878f-825e-4708-8030-29f6b22f0e3b is in state STARTED
2025-07-12 13:45:51.681598 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:45:51.691856 | orchestrator | 2025-07-12 13:45:51 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:45:51.693624 | orchestrator | 2025-07-12 13:45:51 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:45:54.722298 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:45:54.722382 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:45:54.726135 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:45:54.726745 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task a3b2878f-825e-4708-8030-29f6b22f0e3b is in state STARTED
2025-07-12 13:45:54.727728 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:45:54.728407 | orchestrator | 2025-07-12 13:45:54 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:45:54.728500 | orchestrator | 2025-07-12 13:45:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:45:57.766291 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:45:57.766861 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:45:57.767614 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:45:57.769076 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task a3b2878f-825e-4708-8030-29f6b22f0e3b is in state STARTED
2025-07-12 13:45:57.771408 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:45:57.771432 | orchestrator | 2025-07-12 13:45:57 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:45:57.771443 | orchestrator | 2025-07-12 13:45:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:46:00.811951 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:46:00.812083 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:46:00.812097 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:46:00.812109 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task a3b2878f-825e-4708-8030-29f6b22f0e3b is in state SUCCESS
2025-07-12 13:46:00.812187 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED
2025-07-12 13:46:00.812765 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:46:00.817403 | orchestrator | 2025-07-12 13:46:00 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:46:00.817433 | orchestrator | 2025-07-12 13:46:00 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:46:03.862004 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:46:03.862181 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:46:03.865332 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:46:03.865558 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED
2025-07-12 13:46:03.866264 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:46:03.867107 | orchestrator | 2025-07-12 13:46:03 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:46:03.867132 | orchestrator | 2025-07-12 13:46:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:46:06.905399 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:46:06.906802 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:46:06.907433 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:46:06.910623 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED
2025-07-12 13:46:06.911922 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:46:06.915013 | orchestrator | 2025-07-12 13:46:06 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:46:06.915050 | orchestrator | 2025-07-12 13:46:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:46:09.938986 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:46:09.939551 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:46:09.940574 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:46:09.943522 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED
2025-07-12 13:46:09.944180 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:46:09.944848 | orchestrator | 2025-07-12 13:46:09 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:46:09.946204 | orchestrator | 2025-07-12 13:46:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:46:12.976204 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:46:12.976911 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:46:12.978091 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:46:12.979199 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED
2025-07-12 13:46:12.980976 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:46:12.983818 | orchestrator | 2025-07-12 13:46:12 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state STARTED
2025-07-12 13:46:12.983843 | orchestrator | 2025-07-12 13:46:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:46:16.017371 | orchestrator | 2025-07-12 13:46:16 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED
2025-07-12 13:46:16.021644 | orchestrator | 2025-07-12 13:46:16 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state STARTED
2025-07-12 13:46:16.022795 | orchestrator | 2025-07-12 13:46:16 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:46:16.025415 | orchestrator | 2025-07-12 13:46:16 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED
2025-07-12 13:46:16.025437 | orchestrator | 2025-07-12 13:46:16 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:46:16.026313 | orchestrator | 2025-07-12 13:46:16 | INFO  | Task 2ba1939b-6754-49c2-b450-b27a54cb1e49 is in state SUCCESS
2025-07-12 13:46:16.029749 | orchestrator |
2025-07-12 13:46:16.029792 | orchestrator |
2025-07-12 13:46:16.029805 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 13:46:16.029816 | orchestrator |
2025-07-12 13:46:16.029827 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 13:46:16.029838 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.301) 0:00:00.301 *********
2025-07-12 13:46:16.029850 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:16.029862 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:16.029873 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:16.029884 | orchestrator |
2025-07-12 13:46:16.029895 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 13:46:16.029906 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.370) 0:00:00.671 *********
2025-07-12 13:46:16.029918 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-07-12 13:46:16.029929 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-07-12 13:46:16.029965 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-07-12 13:46:16.029977 | orchestrator |
2025-07-12 13:46:16.029988 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-07-12 13:46:16.029998 | orchestrator |
2025-07-12 13:46:16.030009 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-07-12 13:46:16.030093 | orchestrator | Saturday 12 July 2025 13:45:45 +0000 (0:00:00.690) 0:00:01.362 *********
2025-07-12 13:46:16.030105 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:46:16.030118 | orchestrator |
2025-07-12 13:46:16.030129 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-07-12 13:46:16.030148 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.676) 0:00:02.038 *********
2025-07-12 13:46:16.030160 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-07-12 13:46:16.030172 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-07-12 13:46:16.030183 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-07-12 13:46:16.030194 | orchestrator |
2025-07-12 13:46:16.030205 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-07-12 13:46:16.030215 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.870) 0:00:02.909 *********
2025-07-12 13:46:16.030226 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-07-12 13:46:16.030237 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-07-12 13:46:16.030248 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-07-12 13:46:16.030259 | orchestrator |
2025-07-12 13:46:16.030270 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-07-12 13:46:16.030281 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:01.849) 0:00:04.759 *********
2025-07-12 13:46:16.030292 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:16.030303 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:16.030314 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:16.030325 | orchestrator |
2025-07-12 13:46:16.030336 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-07-12 13:46:16.030347 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:01.811) 0:00:06.570 *********
2025-07-12 13:46:16.030358 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:16.030369 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:16.030380 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:16.030391 | orchestrator |
2025-07-12 13:46:16.030402 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:46:16.030413 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:46:16.030440 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:46:16.030452 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:46:16.030463 | orchestrator |
2025-07-12 13:46:16.030474 | orchestrator |
2025-07-12 13:46:16.030485 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:46:16.030496 | orchestrator | Saturday 12 July 2025 13:45:58 +0000 (0:00:07.933) 0:00:14.503 *********
2025-07-12 13:46:16.030507 | orchestrator | ===============================================================================
2025-07-12 13:46:16.030518 | orchestrator | memcached : Restart memcached container --------------------------------- 7.93s
2025-07-12 13:46:16.030529 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.85s
2025-07-12 13:46:16.030540 | orchestrator | memcached : Check memcached container ----------------------------------- 1.81s
2025-07-12 13:46:16.030551 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.87s
2025-07-12 13:46:16.030562 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s
2025-07-12 13:46:16.030573 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.68s
2025-07-12 13:46:16.030584 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2025-07-12 13:46:16.030595 | orchestrator |
2025-07-12 13:46:16.030606 | orchestrator |
2025-07-12 13:46:16.030617 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 13:46:16.030628 | orchestrator |
2025-07-12 13:46:16.030639 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 13:46:16.030650 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.314) 0:00:00.314 *********
2025-07-12 13:46:16.030661 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:16.030672 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:16.030683 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:16.030694 | orchestrator |
2025-07-12 13:46:16.030705 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 13:46:16.030729 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.443) 0:00:00.757 *********
2025-07-12 13:46:16.030740 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-07-12 13:46:16.030751 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-07-12 13:46:16.030762 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-07-12 13:46:16.030773 | orchestrator |
2025-07-12 13:46:16.030784 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-07-12 13:46:16.030795 | orchestrator |
2025-07-12 13:46:16.030805 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-07-12 13:46:16.030816 | orchestrator | Saturday 12 July 2025 13:45:45 +0000 (0:00:00.482) 0:00:01.239 *********
2025-07-12 13:46:16.030828 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:46:16.030838 | orchestrator |
2025-07-12 13:46:16.030849 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-07-12 13:46:16.030860 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.641) 0:00:01.880 *********
2025-07-12 13:46:16.030879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 13:46:16.030896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 13:46:16.030916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image':
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.030928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.030993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031029 | orchestrator | 2025-07-12 13:46:16.031041 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-07-12 13:46:16.031052 | orchestrator | Saturday 12 July 2025 13:45:47 +0000 (0:00:01.403) 0:00:03.283 ********* 2025-07-12 13:46:16.031064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031152 | orchestrator | 2025-07-12 13:46:16.031163 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-07-12 13:46:16.031174 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:02.742) 0:00:06.026 ********* 2025-07-12 13:46:16.031186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031269 | orchestrator | 2025-07-12 13:46:16.031285 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-07-12 13:46:16.031297 | orchestrator | Saturday 12 July 2025 13:45:53 +0000 (0:00:03.109) 0:00:09.135 ********* 2025-07-12 13:46:16.031308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031326 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 13:46:16.031387 | orchestrator | 2025-07-12 13:46:16.031398 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 13:46:16.031409 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:02.333) 0:00:11.469 ********* 2025-07-12 13:46:16.031420 | orchestrator | 2025-07-12 13:46:16.031431 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 13:46:16.031448 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:00.061) 0:00:11.531 ********* 2025-07-12 13:46:16.031459 | orchestrator | 2025-07-12 13:46:16.031470 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2025-07-12 13:46:16.031487 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:00.089) 0:00:11.620 ********* 2025-07-12 13:46:16.031498 | orchestrator | 2025-07-12 13:46:16.031509 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-07-12 13:46:16.031519 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:00.094) 0:00:11.714 ********* 2025-07-12 13:46:16.031530 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:16.031541 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:16.031552 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:16.031563 | orchestrator | 2025-07-12 13:46:16.031574 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-07-12 13:46:16.031585 | orchestrator | Saturday 12 July 2025 13:46:04 +0000 (0:00:08.714) 0:00:20.429 ********* 2025-07-12 13:46:16.031595 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:16.031606 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:16.031617 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:16.031628 | orchestrator | 2025-07-12 13:46:16.031639 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:46:16.031650 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:16.031661 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:16.031677 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:16.031688 | orchestrator | 2025-07-12 13:46:16.031699 | orchestrator | 2025-07-12 13:46:16.031710 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:46:16.031721 | orchestrator | Saturday 12 July 
2025 13:46:13 +0000 (0:00:09.209) 0:00:29.638 ********* 2025-07-12 13:46:16.031731 | orchestrator | =============================================================================== 2025-07-12 13:46:16.031742 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.21s 2025-07-12 13:46:16.031753 | orchestrator | redis : Restart redis container ----------------------------------------- 8.71s 2025-07-12 13:46:16.031764 | orchestrator | redis : Copying over redis config files --------------------------------- 3.11s 2025-07-12 13:46:16.031775 | orchestrator | redis : Copying over default config.json files -------------------------- 2.74s 2025-07-12 13:46:16.031786 | orchestrator | redis : Check redis containers ------------------------------------------ 2.33s 2025-07-12 13:46:16.031796 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.40s 2025-07-12 13:46:16.031807 | orchestrator | redis : include_tasks --------------------------------------------------- 0.64s 2025-07-12 13:46:16.031818 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2025-07-12 13:46:16.031829 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2025-07-12 13:46:16.031839 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.25s 2025-07-12 13:46:16.031850 | orchestrator | 2025-07-12 13:46:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:19.061100 | orchestrator | 2025-07-12 13:46:19 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:19.061705 | orchestrator | 2025-07-12 13:46:19 | INFO  | Task bd1fd651-bc12-43dd-991c-5da8bf61dd8d is in state SUCCESS 2025-07-12 13:46:19.063888 | orchestrator | 2025-07-12 13:46:19.063965 | orchestrator | 2025-07-12 13:46:19.063978 | orchestrator | PLAY [Prepare all k3s nodes] 
*************************************************** 2025-07-12 13:46:19.063990 | orchestrator | 2025-07-12 13:46:19.064000 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-07-12 13:46:19.064010 | orchestrator | Saturday 12 July 2025 13:42:42 +0000 (0:00:00.214) 0:00:00.214 ********* 2025-07-12 13:46:19.064020 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:19.064031 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:19.064061 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:19.064071 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.064081 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.064091 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.064100 | orchestrator | 2025-07-12 13:46:19.064110 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-07-12 13:46:19.064120 | orchestrator | Saturday 12 July 2025 13:42:42 +0000 (0:00:00.721) 0:00:00.936 ********* 2025-07-12 13:46:19.064130 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:19.064140 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:19.064150 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:19.064159 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.064169 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.064179 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.064188 | orchestrator | 2025-07-12 13:46:19.064198 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-07-12 13:46:19.064208 | orchestrator | Saturday 12 July 2025 13:42:43 +0000 (0:00:00.613) 0:00:01.549 ********* 2025-07-12 13:46:19.064218 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:19.064227 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:19.064237 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:19.064246 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 13:46:19.064256 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.064271 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.064281 | orchestrator | 2025-07-12 13:46:19.064290 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-07-12 13:46:19.064300 | orchestrator | Saturday 12 July 2025 13:42:44 +0000 (0:00:00.598) 0:00:02.147 ********* 2025-07-12 13:46:19.064310 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:19.064319 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.064329 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:19.064338 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:19.064348 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.064357 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.064367 | orchestrator | 2025-07-12 13:46:19.064377 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-07-12 13:46:19.064386 | orchestrator | Saturday 12 July 2025 13:42:46 +0000 (0:00:01.894) 0:00:04.041 ********* 2025-07-12 13:46:19.064396 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:19.064405 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:19.064415 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:19.064425 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.064434 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.064444 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.064453 | orchestrator | 2025-07-12 13:46:19.064463 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-07-12 13:46:19.064473 | orchestrator | Saturday 12 July 2025 13:42:47 +0000 (0:00:01.321) 0:00:05.362 ********* 2025-07-12 13:46:19.064482 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:19.064492 | orchestrator | changed: [testbed-node-4] 
2025-07-12 13:46:19.064502 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:19.064511 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.064521 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.064531 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.064540 | orchestrator | 2025-07-12 13:46:19.064550 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-07-12 13:46:19.064559 | orchestrator | Saturday 12 July 2025 13:42:48 +0000 (0:00:01.066) 0:00:06.429 ********* 2025-07-12 13:46:19.064569 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:19.064579 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:19.064589 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:19.064599 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.064608 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.064625 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.064635 | orchestrator | 2025-07-12 13:46:19.064644 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-07-12 13:46:19.064654 | orchestrator | Saturday 12 July 2025 13:42:49 +0000 (0:00:00.608) 0:00:07.038 ********* 2025-07-12 13:46:19.064664 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:19.064673 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:19.064683 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:19.064693 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.064702 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.064712 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.064721 | orchestrator | 2025-07-12 13:46:19.064731 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-07-12 13:46:19.064741 | orchestrator | Saturday 12 July 2025 13:42:49 +0000 (0:00:00.598) 0:00:07.636 ********* 
2025-07-12 13:46:19.064750 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 13:46:19.064760 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 13:46:19.064770 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:46:19.064779 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 13:46:19.064789 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 13:46:19.064798 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:46:19.064808 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 13:46:19.064818 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 13:46:19.064827 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:46:19.064837 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 13:46:19.064860 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 13:46:19.064870 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:19.064879 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 13:46:19.064889 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 13:46:19.064899 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:19.064908 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 13:46:19.064918 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 13:46:19.064927 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:19.064955 | orchestrator |
2025-07-12 13:46:19.064965 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-07-12 13:46:19.064975 | orchestrator | Saturday 12 July 2025 13:42:50 +0000 (0:00:01.157) 0:00:08.794 *********
2025-07-12 13:46:19.064984 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:46:19.064994 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:46:19.065003 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:46:19.065013 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:19.065022 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:19.065032 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:19.065041 | orchestrator |
2025-07-12 13:46:19.065051 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-07-12 13:46:19.065061 | orchestrator | Saturday 12 July 2025 13:42:52 +0000 (0:00:01.480) 0:00:10.274 *********
2025-07-12 13:46:19.065071 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:46:19.065080 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:46:19.065090 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:46:19.065099 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:19.065119 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:19.065129 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:19.065138 | orchestrator |
2025-07-12 13:46:19.065148 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-07-12 13:46:19.065164 | orchestrator | Saturday 12 July 2025 13:42:53 +0000 (0:00:00.945) 0:00:11.219 *********
2025-07-12 13:46:19.065174 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:46:19.065183 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:46:19.065193 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:46:19.065202 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:19.065212 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:19.065221 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:19.065231 | orchestrator |
2025-07-12 13:46:19.065240 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-07-12 13:46:19.065250 | orchestrator | Saturday 12 July 2025 13:42:58 +0000 (0:00:05.740) 0:00:16.960 *********
2025-07-12 13:46:19.065260 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:46:19.065269 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:46:19.065279 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:46:19.065288 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:19.065298 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:19.065307 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:19.065317 | orchestrator |
2025-07-12 13:46:19.065326 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-07-12 13:46:19.065336 | orchestrator | Saturday 12 July 2025 13:42:59 +0000 (0:00:00.936) 0:00:17.896 *********
2025-07-12 13:46:19.065346 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:46:19.065355 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:46:19.065365 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:46:19.065374 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:19.065384 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:19.065393 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:19.065403 | orchestrator |
2025-07-12 13:46:19.065412 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-07-12 13:46:19.065428 | orchestrator | Saturday 12 July 2025 13:43:01 +0000 (0:00:01.834) 0:00:19.731 *********
2025-07-12 13:46:19.065438 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:46:19.065447 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:46:19.065457 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:46:19.065467 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:19.065476 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:19.065486 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:19.065495 | orchestrator |
2025-07-12 13:46:19.065505 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-07-12 13:46:19.065515 | orchestrator | Saturday 12 July 2025 13:43:02 +0000 (0:00:00.942) 0:00:20.674 *********
2025-07-12 13:46:19.065525 | orchestrator | changed: [testbed-node-4] => (item=rancher)
2025-07-12 13:46:19.065534 | orchestrator | changed: [testbed-node-5] => (item=rancher)
2025-07-12 13:46:19.065544 | orchestrator | changed: [testbed-node-3] => (item=rancher)
2025-07-12 13:46:19.065554 | orchestrator | changed: [testbed-node-0] => (item=rancher)
2025-07-12 13:46:19.065563 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
2025-07-12 13:46:19.065573 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
2025-07-12 13:46:19.065582 | orchestrator | changed: [testbed-node-1] => (item=rancher)
2025-07-12 13:46:19.065592 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
2025-07-12 13:46:19.065601 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
2025-07-12 13:46:19.065611 | orchestrator | changed: [testbed-node-2] => (item=rancher)
2025-07-12 13:46:19.065620 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
2025-07-12 13:46:19.065630 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
2025-07-12 13:46:19.065640 | orchestrator |
2025-07-12 13:46:19.065649 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-07-12 13:46:19.065659 | orchestrator | Saturday 12 July 2025 13:43:04 +0000 (0:00:02.275) 0:00:22.949 *********
2025-07-12 13:46:19.065674 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:46:19.065684 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:46:19.065693 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:46:19.065703 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:19.065713 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:19.065722 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:19.065732 | orchestrator |
2025-07-12 13:46:19.065747 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-07-12 13:46:19.065757 | orchestrator |
2025-07-12 13:46:19.065813 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-07-12 13:46:19.065825 | orchestrator | Saturday 12 July 2025 13:43:07 +0000 (0:00:02.253) 0:00:25.203 *********
2025-07-12 13:46:19.065835 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:19.065844 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:19.065854 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:19.065863 | orchestrator |
2025-07-12 13:46:19.065884 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-07-12 13:46:19.065894 | orchestrator | Saturday 12 July 2025 13:43:08 +0000 (0:00:01.340) 0:00:26.543 *********
2025-07-12 13:46:19.065904 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:19.065913 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:19.065923 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:19.065932 | orchestrator |
2025-07-12 13:46:19.065956 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-07-12 13:46:19.065965 | orchestrator | Saturday 12 July 2025 13:43:10 +0000 (0:00:01.489) 0:00:28.032 *********
2025-07-12 13:46:19.065975 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:19.065984 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:19.065994 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:19.066003 | orchestrator |
2025-07-12 13:46:19.066013 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-07-12 13:46:19.066091 | orchestrator | Saturday 12 July 2025 13:43:11 +0000 (0:00:01.119) 0:00:29.152 *********
2025-07-12 13:46:19.066102 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:19.066112 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:19.066122 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:19.066131 | orchestrator |
2025-07-12 13:46:19.066141 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-07-12 13:46:19.066151 | orchestrator | Saturday 12 July 2025 13:43:11 +0000 (0:00:00.811) 0:00:29.963 *********
2025-07-12 13:46:19.066160 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:19.066170 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:19.066180 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:19.066190 | orchestrator |
2025-07-12 13:46:19.066199 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-07-12 13:46:19.066209 | orchestrator | Saturday 12 July 2025 13:43:12 +0000 (0:00:00.488) 0:00:30.452 *********
2025-07-12 13:46:19.066219 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:19.066228 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:19.066238 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:19.066247 | orchestrator |
2025-07-12 13:46:19.066257 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-07-12 13:46:19.066267 | orchestrator | Saturday 12 July 2025 13:43:13 +0000 (0:00:00.819) 0:00:31.271 *********
2025-07-12 13:46:19.066277 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:19.066287 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:19.066296 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:19.066306 | orchestrator |
2025-07-12 13:46:19.066316 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-07-12 13:46:19.066325 | orchestrator | Saturday 12 July 2025 13:43:14 +0000 (0:00:01.715) 0:00:32.987 *********
2025-07-12 13:46:19.066335 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:46:19.066345 | orchestrator |
2025-07-12 13:46:19.066354 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-07-12 13:46:19.066371 | orchestrator | Saturday 12 July 2025 13:43:15 +0000 (0:00:00.639) 0:00:33.627 *********
2025-07-12 13:46:19.066382 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:46:19.066391 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:46:19.066401 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:46:19.066411 | orchestrator |
2025-07-12 13:46:19.066421 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-07-12 13:46:19.066476 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:01.407) 0:00:35.034 *********
2025-07-12 13:46:19.066492 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:19.066502 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:19.066512 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:19.066522 | orchestrator |
2025-07-12 13:46:19.066531 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-07-12 13:46:19.066541 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:00.926) 0:00:35.961 *********
2025-07-12 13:46:19.066551 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:19.066560 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:19.066570 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:19.066580 | orchestrator |
2025-07-12 13:46:19.066589 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-07-12 13:46:19.066599 | orchestrator | Saturday 12 July 2025 13:43:19 +0000 (0:00:01.291) 0:00:37.253 *********
2025-07-12 13:46:19.066608 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:19.066618 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:19.066628 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:19.066637 | orchestrator |
2025-07-12 13:46:19.066647 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-07-12 13:46:19.066657 | orchestrator | Saturday 12 July 2025 13:43:20 +0000 (0:00:01.554) 0:00:38.807 *********
2025-07-12 13:46:19.066666 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:19.066676 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:19.066685 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:19.066695 | orchestrator |
2025-07-12 13:46:19.066705 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-07-12 13:46:19.066715 | orchestrator | Saturday 12 July 2025 13:43:21 +0000 (0:00:00.420) 0:00:39.228 *********
2025-07-12 13:46:19.066725 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:46:19.066735 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:46:19.066744 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:46:19.066754 | orchestrator |
2025-07-12 13:46:19.066764 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-07-12 13:46:19.066773 | orchestrator | Saturday 12 July 2025 13:43:21 +0000 (0:00:00.475) 0:00:39.703 *********
2025-07-12 13:46:19.066783 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:46:19.066793 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:46:19.066803 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:46:19.066812 | orchestrator |
2025-07-12 13:46:19.066831 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-07-12 13:46:19.066841 | orchestrator | Saturday 12 July 2025 13:43:22 +0000 (0:00:01.257) 0:00:40.960 *********
2025-07-12 13:46:19.066851 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-12 13:46:19.066861 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-12 13:46:19.066871 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-12 13:46:19.066881 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-12 13:46:19.066891 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-12 13:46:19.066913 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-12 13:46:19.066922 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-12 13:46:19.066932 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-12 13:46:19.066960 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-12 13:46:19.066970 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-07-12 13:46:19.066980 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-07-12 13:46:19.066990 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-07-12 13:46:19.067000 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-07-12 13:46:19.067010 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-07-12 13:46:19.067019 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-07-12 13:46:19.067029 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.067039 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.067049 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.067059 | orchestrator | 2025-07-12 13:46:19.067069 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-07-12 13:46:19.067079 | orchestrator | Saturday 12 July 2025 13:44:18 +0000 (0:00:55.880) 0:01:36.841 ********* 2025-07-12 13:46:19.067094 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.067104 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.067114 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.067124 | orchestrator | 2025-07-12 13:46:19.067134 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-07-12 13:46:19.067144 | orchestrator | Saturday 12 July 2025 13:44:19 +0000 (0:00:00.541) 0:01:37.382 ********* 2025-07-12 13:46:19.067154 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.067163 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.067173 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.067183 | orchestrator | 2025-07-12 13:46:19.067193 | orchestrator | TASK 
[k3s_server : Copy K3s service file] ************************************** 2025-07-12 13:46:19.067202 | orchestrator | Saturday 12 July 2025 13:44:21 +0000 (0:00:01.880) 0:01:39.262 ********* 2025-07-12 13:46:19.067212 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.067222 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.067232 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.067241 | orchestrator | 2025-07-12 13:46:19.067251 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-07-12 13:46:19.067261 | orchestrator | Saturday 12 July 2025 13:44:22 +0000 (0:00:01.212) 0:01:40.475 ********* 2025-07-12 13:46:19.067271 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.067281 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.067291 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.067301 | orchestrator | 2025-07-12 13:46:19.067310 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-07-12 13:46:19.067320 | orchestrator | Saturday 12 July 2025 13:44:44 +0000 (0:00:21.958) 0:02:02.433 ********* 2025-07-12 13:46:19.067330 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.067346 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.067356 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.067366 | orchestrator | 2025-07-12 13:46:19.067375 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-07-12 13:46:19.067385 | orchestrator | Saturday 12 July 2025 13:44:45 +0000 (0:00:00.731) 0:02:03.165 ********* 2025-07-12 13:46:19.067395 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.067405 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.067414 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.067424 | orchestrator | 2025-07-12 13:46:19.067439 | orchestrator | TASK [k3s_server : Change file access node-token] 
****************************** 2025-07-12 13:46:19.067450 | orchestrator | Saturday 12 July 2025 13:44:46 +0000 (0:00:01.215) 0:02:04.381 ********* 2025-07-12 13:46:19.067460 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.067470 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.067480 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.067490 | orchestrator | 2025-07-12 13:46:19.067500 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-07-12 13:46:19.067509 | orchestrator | Saturday 12 July 2025 13:44:47 +0000 (0:00:00.693) 0:02:05.074 ********* 2025-07-12 13:46:19.067519 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.067529 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.067539 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.067548 | orchestrator | 2025-07-12 13:46:19.067558 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-07-12 13:46:19.067568 | orchestrator | Saturday 12 July 2025 13:44:47 +0000 (0:00:00.760) 0:02:05.834 ********* 2025-07-12 13:46:19.067578 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.067588 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.067597 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.067607 | orchestrator | 2025-07-12 13:46:19.067617 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-07-12 13:46:19.067627 | orchestrator | Saturday 12 July 2025 13:44:48 +0000 (0:00:00.356) 0:02:06.191 ********* 2025-07-12 13:46:19.067637 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.067647 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.067656 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.067666 | orchestrator | 2025-07-12 13:46:19.067676 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-07-12 
13:46:19.067686 | orchestrator | Saturday 12 July 2025 13:44:49 +0000 (0:00:01.163) 0:02:07.355 ********* 2025-07-12 13:46:19.067695 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.067705 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.067715 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.067724 | orchestrator | 2025-07-12 13:46:19.067734 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-07-12 13:46:19.067744 | orchestrator | Saturday 12 July 2025 13:44:50 +0000 (0:00:00.655) 0:02:08.010 ********* 2025-07-12 13:46:19.067754 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.067764 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.067773 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.067783 | orchestrator | 2025-07-12 13:46:19.067793 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-07-12 13:46:19.067803 | orchestrator | Saturday 12 July 2025 13:44:50 +0000 (0:00:00.889) 0:02:08.899 ********* 2025-07-12 13:46:19.067812 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:19.067822 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:19.067832 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:19.067842 | orchestrator | 2025-07-12 13:46:19.067852 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-07-12 13:46:19.067862 | orchestrator | Saturday 12 July 2025 13:44:51 +0000 (0:00:00.873) 0:02:09.773 ********* 2025-07-12 13:46:19.067872 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.067882 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.067891 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.067907 | orchestrator | 2025-07-12 13:46:19.067917 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-07-12 
13:46:19.067927 | orchestrator | Saturday 12 July 2025 13:44:52 +0000 (0:00:00.537) 0:02:10.310 ********* 2025-07-12 13:46:19.067963 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.067974 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.067983 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.067993 | orchestrator | 2025-07-12 13:46:19.068003 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-07-12 13:46:19.068012 | orchestrator | Saturday 12 July 2025 13:44:52 +0000 (0:00:00.324) 0:02:10.635 ********* 2025-07-12 13:46:19.068022 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.068037 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.068047 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.068057 | orchestrator | 2025-07-12 13:46:19.068066 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-07-12 13:46:19.068076 | orchestrator | Saturday 12 July 2025 13:44:53 +0000 (0:00:00.677) 0:02:11.313 ********* 2025-07-12 13:46:19.068086 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.068095 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.068105 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.068115 | orchestrator | 2025-07-12 13:46:19.068125 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-07-12 13:46:19.068134 | orchestrator | Saturday 12 July 2025 13:44:53 +0000 (0:00:00.617) 0:02:11.930 ********* 2025-07-12 13:46:19.068144 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-07-12 13:46:19.068154 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-07-12 13:46:19.068164 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-07-12 13:46:19.068173 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-07-12 13:46:19.068183 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-07-12 13:46:19.068193 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-07-12 13:46:19.068202 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-07-12 13:46:19.068212 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-07-12 13:46:19.068222 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-07-12 13:46:19.068238 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-07-12 13:46:19.068248 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-07-12 13:46:19.068258 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-07-12 13:46:19.068268 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-07-12 13:46:19.068277 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-07-12 13:46:19.068287 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-07-12 13:46:19.068297 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-07-12 13:46:19.068306 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-07-12 13:46:19.068316 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-07-12 13:46:19.068326 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-07-12 13:46:19.068335 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-07-12 13:46:19.068351 | orchestrator | 2025-07-12 13:46:19.068361 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-07-12 13:46:19.068371 | orchestrator | 2025-07-12 13:46:19.068381 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-07-12 13:46:19.068391 | orchestrator | Saturday 12 July 2025 13:44:57 +0000 (0:00:03.172) 0:02:15.102 ********* 2025-07-12 13:46:19.068401 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:19.068411 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:19.068420 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:19.068430 | orchestrator | 2025-07-12 13:46:19.068440 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-07-12 13:46:19.068450 | orchestrator | Saturday 12 July 2025 13:44:57 +0000 (0:00:00.326) 0:02:15.429 ********* 2025-07-12 13:46:19.068459 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:19.068469 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:19.068479 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:19.068489 | orchestrator | 2025-07-12 13:46:19.068498 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-07-12 13:46:19.068508 | orchestrator | Saturday 12 July 2025 13:44:58 +0000 (0:00:00.640) 0:02:16.069 ********* 2025-07-12 13:46:19.068518 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:19.068527 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:19.068537 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:19.068547 | orchestrator | 2025-07-12 
13:46:19.068557 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-07-12 13:46:19.068567 | orchestrator | Saturday 12 July 2025 13:44:58 +0000 (0:00:00.500) 0:02:16.569 ********* 2025-07-12 13:46:19.068577 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:46:19.068587 | orchestrator | 2025-07-12 13:46:19.068596 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-07-12 13:46:19.068606 | orchestrator | Saturday 12 July 2025 13:44:59 +0000 (0:00:00.469) 0:02:17.039 ********* 2025-07-12 13:46:19.068616 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:19.068626 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:19.068636 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:19.068645 | orchestrator | 2025-07-12 13:46:19.068655 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-07-12 13:46:19.068665 | orchestrator | Saturday 12 July 2025 13:44:59 +0000 (0:00:00.304) 0:02:17.344 ********* 2025-07-12 13:46:19.068679 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:19.068689 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:19.068699 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:19.068709 | orchestrator | 2025-07-12 13:46:19.068718 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-07-12 13:46:19.068728 | orchestrator | Saturday 12 July 2025 13:44:59 +0000 (0:00:00.465) 0:02:17.809 ********* 2025-07-12 13:46:19.068738 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:19.068748 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:19.068757 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:19.068767 | orchestrator | 2025-07-12 13:46:19.068777 | orchestrator | TASK [k3s_agent : Create 
/etc/rancher/k3s directory] *************************** 2025-07-12 13:46:19.068787 | orchestrator | Saturday 12 July 2025 13:45:00 +0000 (0:00:00.281) 0:02:18.090 ********* 2025-07-12 13:46:19.068796 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:19.068806 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:19.068816 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:19.068826 | orchestrator | 2025-07-12 13:46:19.068835 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-07-12 13:46:19.068845 | orchestrator | Saturday 12 July 2025 13:45:00 +0000 (0:00:00.630) 0:02:18.720 ********* 2025-07-12 13:46:19.068855 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:19.068865 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:19.068880 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:19.068890 | orchestrator | 2025-07-12 13:46:19.068900 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-07-12 13:46:19.068909 | orchestrator | Saturday 12 July 2025 13:45:01 +0000 (0:00:01.139) 0:02:19.860 ********* 2025-07-12 13:46:19.068919 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:19.068929 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:19.068988 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:19.068999 | orchestrator | 2025-07-12 13:46:19.069009 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-07-12 13:46:19.069019 | orchestrator | Saturday 12 July 2025 13:45:03 +0000 (0:00:01.756) 0:02:21.616 ********* 2025-07-12 13:46:19.069029 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:19.069039 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:19.069048 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:19.069058 | orchestrator | 2025-07-12 13:46:19.069074 | orchestrator | PLAY [Prepare kubeconfig file] 
************************************************* 2025-07-12 13:46:19.069084 | orchestrator | 2025-07-12 13:46:19.069094 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-07-12 13:46:19.069104 | orchestrator | Saturday 12 July 2025 13:45:15 +0000 (0:00:12.151) 0:02:33.767 ********* 2025-07-12 13:46:19.069113 | orchestrator | ok: [testbed-manager] 2025-07-12 13:46:19.069123 | orchestrator | 2025-07-12 13:46:19.069133 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-07-12 13:46:19.069143 | orchestrator | Saturday 12 July 2025 13:45:16 +0000 (0:00:00.750) 0:02:34.518 ********* 2025-07-12 13:46:19.069153 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:19.069163 | orchestrator | 2025-07-12 13:46:19.069172 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-12 13:46:19.069182 | orchestrator | Saturday 12 July 2025 13:45:17 +0000 (0:00:00.547) 0:02:35.065 ********* 2025-07-12 13:46:19.069192 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-12 13:46:19.069202 | orchestrator | 2025-07-12 13:46:19.069212 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-12 13:46:19.069221 | orchestrator | Saturday 12 July 2025 13:45:18 +0000 (0:00:01.072) 0:02:36.138 ********* 2025-07-12 13:46:19.069231 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:19.069241 | orchestrator | 2025-07-12 13:46:19.069251 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-07-12 13:46:19.069261 | orchestrator | Saturday 12 July 2025 13:45:18 +0000 (0:00:00.836) 0:02:36.974 ********* 2025-07-12 13:46:19.069270 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:19.069280 | orchestrator | 2025-07-12 13:46:19.069290 | orchestrator | TASK [Make kubeconfig available for use inside the 
manager service] ************ 2025-07-12 13:46:19.069300 | orchestrator | Saturday 12 July 2025 13:45:19 +0000 (0:00:00.573) 0:02:37.548 ********* 2025-07-12 13:46:19.069310 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 13:46:19.069319 | orchestrator | 2025-07-12 13:46:19.069329 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-07-12 13:46:19.069339 | orchestrator | Saturday 12 July 2025 13:45:21 +0000 (0:00:01.524) 0:02:39.072 ********* 2025-07-12 13:46:19.069349 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 13:46:19.069359 | orchestrator | 2025-07-12 13:46:19.069369 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-07-12 13:46:19.069379 | orchestrator | Saturday 12 July 2025 13:45:21 +0000 (0:00:00.804) 0:02:39.877 ********* 2025-07-12 13:46:19.069389 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:19.069398 | orchestrator | 2025-07-12 13:46:19.069408 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-07-12 13:46:19.069418 | orchestrator | Saturday 12 July 2025 13:45:22 +0000 (0:00:00.448) 0:02:40.325 ********* 2025-07-12 13:46:19.069428 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:19.069438 | orchestrator | 2025-07-12 13:46:19.069448 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-07-12 13:46:19.069464 | orchestrator | 2025-07-12 13:46:19.069474 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-07-12 13:46:19.069483 | orchestrator | Saturday 12 July 2025 13:45:22 +0000 (0:00:00.437) 0:02:40.763 ********* 2025-07-12 13:46:19.069493 | orchestrator | ok: [testbed-manager] 2025-07-12 13:46:19.069503 | orchestrator | 2025-07-12 13:46:19.069513 | orchestrator | TASK [kubectl : Include distribution specific install tasks] 
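The "Prepare kubeconfig file" play above copies the k3s-generated kubeconfig to the operator's `.kube` directory and rewrites its server address. The exact Ansible modules are not shown in the log, so this is an assumed equivalent; the VIP address and the temp directory standing in for the real filesystem are illustrative:

```python
# Sketch of the kubeconfig preparation steps recorded above. k3s writes
# /etc/rancher/k3s/k3s.yaml pointing at 127.0.0.1 by default; the play
# installs it as ~/.kube/config and swaps in the cluster address.
import pathlib
import tempfile

def install_kubeconfig(src_text: str, dest: pathlib.Path, server: str) -> None:
    """Write the kubeconfig with the local endpoint replaced by `server`."""
    dest.parent.mkdir(parents=True, exist_ok=True)   # "Create .kube directory"
    rewritten = src_text.replace("https://127.0.0.1:6443", server)
    dest.write_text(rewritten)                       # "Write kubeconfig file"
    dest.chmod(0o600)                                # keep credentials private

# Stand-in for the file fetched from testbed-node-0 in the log.
k3s_yaml = "clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n"
kube_dir = pathlib.Path(tempfile.mkdtemp())

# "Change server address in the kubeconfig": 192.168.16.8 is a placeholder,
# the real VIP is not visible in the log.
install_kubeconfig(k3s_yaml, kube_dir / "config", "https://192.168.16.8:6443")
print((kube_dir / "config").read_text())
```

The two "inside the manager service" tasks repeat the same rewrite for a second copy of the file used by the manager container.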
******************* 2025-07-12 13:46:19.069522 | orchestrator | Saturday 12 July 2025 13:45:22 +0000 (0:00:00.141) 0:02:40.904 ********* 2025-07-12 13:46:19.069532 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 13:46:19.069542 | orchestrator | 2025-07-12 13:46:19.069552 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-07-12 13:46:19.069567 | orchestrator | Saturday 12 July 2025 13:45:23 +0000 (0:00:00.419) 0:02:41.324 ********* 2025-07-12 13:46:19.069577 | orchestrator | ok: [testbed-manager] 2025-07-12 13:46:19.069587 | orchestrator | 2025-07-12 13:46:19.069597 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-07-12 13:46:19.069606 | orchestrator | Saturday 12 July 2025 13:45:24 +0000 (0:00:00.804) 0:02:42.128 ********* 2025-07-12 13:46:19.069616 | orchestrator | ok: [testbed-manager] 2025-07-12 13:46:19.069626 | orchestrator | 2025-07-12 13:46:19.069636 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-07-12 13:46:19.069646 | orchestrator | Saturday 12 July 2025 13:45:25 +0000 (0:00:01.642) 0:02:43.770 ********* 2025-07-12 13:46:19.069655 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:19.069665 | orchestrator | 2025-07-12 13:46:19.069675 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-07-12 13:46:19.069685 | orchestrator | Saturday 12 July 2025 13:45:26 +0000 (0:00:00.773) 0:02:44.544 ********* 2025-07-12 13:46:19.069694 | orchestrator | ok: [testbed-manager] 2025-07-12 13:46:19.069704 | orchestrator | 2025-07-12 13:46:19.069714 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-07-12 13:46:19.069723 | orchestrator | Saturday 12 July 2025 13:45:27 +0000 (0:00:00.509) 0:02:45.054 ********* 2025-07-12 13:46:19.069733 | 
orchestrator | changed: [testbed-manager] 2025-07-12 13:46:19.069743 | orchestrator | 2025-07-12 13:46:19.069753 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-07-12 13:46:19.069763 | orchestrator | Saturday 12 July 2025 13:45:33 +0000 (0:00:06.584) 0:02:51.639 ********* 2025-07-12 13:46:19.069772 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:19.069782 | orchestrator | 2025-07-12 13:46:19.069792 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-07-12 13:46:19.069801 | orchestrator | Saturday 12 July 2025 13:45:45 +0000 (0:00:12.154) 0:03:03.793 ********* 2025-07-12 13:46:19.069811 | orchestrator | ok: [testbed-manager] 2025-07-12 13:46:19.069821 | orchestrator | 2025-07-12 13:46:19.069831 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-07-12 13:46:19.069840 | orchestrator | 2025-07-12 13:46:19.069850 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-07-12 13:46:19.069864 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.490) 0:03:04.284 ********* 2025-07-12 13:46:19.069874 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.069884 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.069894 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.069904 | orchestrator | 2025-07-12 13:46:19.069914 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-07-12 13:46:19.069923 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.442) 0:03:04.726 ********* 2025-07-12 13:46:19.069933 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.069961 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.069971 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.069981 | orchestrator | 2025-07-12 13:46:19.069990 | 
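The kubectl tasks above (add gpg key, set permissions, add repository, install packages) follow the standard pkgs.k8s.io Debian setup: a dearmored key under `/etc/apt/keyrings` plus one signed-by source line. A small sketch that renders that line; the `v1.30` channel and the keyring path are illustrative, and fetching/dearmoring the key is a network step not shown:

```python
# Render the apt source line the "Add repository Debian" task would write to
# /etc/apt/sources.list.d/kubernetes.list (channel value is an assumption).
def kubernetes_apt_source(
    channel: str,
    keyring: str = "/etc/apt/keyrings/kubernetes-apt-keyring.gpg",
) -> str:
    """One-line deb source pinned to the given pkgs.k8s.io minor release."""
    return (f"deb [signed-by={keyring}] "
            f"https://pkgs.k8s.io/core:/stable:/{channel}/deb/ /")

print(kubernetes_apt_source("v1.30"))
```

The "Install required packages" task then amounts to `apt-get update && apt-get install kubectl` against this source.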
orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-07-12 13:46:19.070007 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.265) 0:03:04.992 ********* 2025-07-12 13:46:19.070045 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:46:19.070057 | orchestrator | 2025-07-12 13:46:19.070067 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-07-12 13:46:19.070076 | orchestrator | Saturday 12 July 2025 13:45:47 +0000 (0:00:00.396) 0:03:05.389 ********* 2025-07-12 13:46:19.070086 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070096 | orchestrator | 2025-07-12 13:46:19.070106 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-07-12 13:46:19.070116 | orchestrator | Saturday 12 July 2025 13:45:47 +0000 (0:00:00.454) 0:03:05.844 ********* 2025-07-12 13:46:19.070126 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070136 | orchestrator | 2025-07-12 13:46:19.070146 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-07-12 13:46:19.070155 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:00.186) 0:03:06.030 ********* 2025-07-12 13:46:19.070165 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070175 | orchestrator | 2025-07-12 13:46:19.070185 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-07-12 13:46:19.070195 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:00.177) 0:03:06.208 ********* 2025-07-12 13:46:19.070205 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070214 | orchestrator | 2025-07-12 13:46:19.070224 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-07-12 13:46:19.070234 | 
orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:00.211) 0:03:06.419 ********* 2025-07-12 13:46:19.070244 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070253 | orchestrator | 2025-07-12 13:46:19.070263 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-07-12 13:46:19.070273 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:00.196) 0:03:06.615 ********* 2025-07-12 13:46:19.070283 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070292 | orchestrator | 2025-07-12 13:46:19.070302 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-07-12 13:46:19.070312 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:00.181) 0:03:06.797 ********* 2025-07-12 13:46:19.070322 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070333 | orchestrator | 2025-07-12 13:46:19.070350 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-07-12 13:46:19.070366 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:00.225) 0:03:07.022 ********* 2025-07-12 13:46:19.070382 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070399 | orchestrator | 2025-07-12 13:46:19.070415 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-07-12 13:46:19.070429 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:00.208) 0:03:07.231 ********* 2025-07-12 13:46:19.070439 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070449 | orchestrator | 2025-07-12 13:46:19.070458 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-07-12 13:46:19.070474 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:00.176) 0:03:07.408 ********* 2025-07-12 13:46:19.070483 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-07-12 13:46:19.070493 | 
orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-07-12 13:46:19.070503 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070513 | orchestrator | 2025-07-12 13:46:19.070522 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-07-12 13:46:19.070532 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:00.243) 0:03:07.651 ********* 2025-07-12 13:46:19.070542 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070551 | orchestrator | 2025-07-12 13:46:19.070561 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-07-12 13:46:19.070578 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:00.164) 0:03:07.816 ********* 2025-07-12 13:46:19.070588 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070597 | orchestrator | 2025-07-12 13:46:19.070607 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-07-12 13:46:19.070617 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:00.167) 0:03:07.984 ********* 2025-07-12 13:46:19.070627 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070636 | orchestrator | 2025-07-12 13:46:19.070646 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-07-12 13:46:19.070656 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:00.518) 0:03:08.503 ********* 2025-07-12 13:46:19.070665 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070675 | orchestrator | 2025-07-12 13:46:19.070685 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-07-12 13:46:19.070694 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:00.198) 0:03:08.701 ********* 2025-07-12 13:46:19.070704 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070713 | orchestrator | 2025-07-12 13:46:19.070723 | 
orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-07-12 13:46:19.070733 | orchestrator | Saturday 12 July 2025 13:45:50 +0000 (0:00:00.195) 0:03:08.897 ********* 2025-07-12 13:46:19.070743 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070752 | orchestrator | 2025-07-12 13:46:19.070762 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-07-12 13:46:19.070779 | orchestrator | Saturday 12 July 2025 13:45:51 +0000 (0:00:00.204) 0:03:09.101 ********* 2025-07-12 13:46:19.070789 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070799 | orchestrator | 2025-07-12 13:46:19.070808 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-07-12 13:46:19.070818 | orchestrator | Saturday 12 July 2025 13:45:51 +0000 (0:00:00.184) 0:03:09.286 ********* 2025-07-12 13:46:19.070828 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070838 | orchestrator | 2025-07-12 13:46:19.070847 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-07-12 13:46:19.070857 | orchestrator | Saturday 12 July 2025 13:45:51 +0000 (0:00:00.203) 0:03:09.489 ********* 2025-07-12 13:46:19.070867 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070876 | orchestrator | 2025-07-12 13:46:19.070886 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-07-12 13:46:19.070896 | orchestrator | Saturday 12 July 2025 13:45:51 +0000 (0:00:00.190) 0:03:09.680 ********* 2025-07-12 13:46:19.070906 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.070915 | orchestrator | 2025-07-12 13:46:19.070925 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-07-12 13:46:19.070982 | orchestrator | Saturday 12 July 2025 13:45:51 +0000 (0:00:00.176) 0:03:09.857 ********* 
2025-07-12 13:46:19.070995 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.071005 | orchestrator | 2025-07-12 13:46:19.071015 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-07-12 13:46:19.071025 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:00.205) 0:03:10.062 ********* 2025-07-12 13:46:19.071035 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-07-12 13:46:19.071045 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-07-12 13:46:19.071055 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-07-12 13:46:19.071064 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-07-12 13:46:19.071074 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.071084 | orchestrator | 2025-07-12 13:46:19.071094 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-07-12 13:46:19.071104 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:00.470) 0:03:10.533 ********* 2025-07-12 13:46:19.071113 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.071130 | orchestrator | 2025-07-12 13:46:19.071140 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-07-12 13:46:19.071150 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:00.212) 0:03:10.746 ********* 2025-07-12 13:46:19.071159 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.071169 | orchestrator | 2025-07-12 13:46:19.071179 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-07-12 13:46:19.071189 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:00.202) 0:03:10.948 ********* 2025-07-12 13:46:19.071199 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.071209 | orchestrator | 2025-07-12 13:46:19.071218 | 
orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-07-12 13:46:19.071228 | orchestrator | Saturday 12 July 2025 13:45:53 +0000 (0:00:00.174) 0:03:11.123 ********* 2025-07-12 13:46:19.071238 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.071248 | orchestrator | 2025-07-12 13:46:19.071258 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-07-12 13:46:19.071267 | orchestrator | Saturday 12 July 2025 13:45:53 +0000 (0:00:00.538) 0:03:11.661 ********* 2025-07-12 13:46:19.071277 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-07-12 13:46:19.071287 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-07-12 13:46:19.071297 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.071306 | orchestrator | 2025-07-12 13:46:19.071321 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-07-12 13:46:19.071331 | orchestrator | Saturday 12 July 2025 13:45:53 +0000 (0:00:00.312) 0:03:11.974 ********* 2025-07-12 13:46:19.071341 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.071351 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.071360 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.071370 | orchestrator | 2025-07-12 13:46:19.071380 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-07-12 13:46:19.071390 | orchestrator | Saturday 12 July 2025 13:45:54 +0000 (0:00:00.339) 0:03:12.314 ********* 2025-07-12 13:46:19.071400 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.071409 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.071419 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.071429 | orchestrator | 2025-07-12 13:46:19.071439 | orchestrator | PLAY [Apply role k9s] 
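The skipped Cilium CLI tasks above implement a simple "install or update?" decision: compare the installed CLI version against the latest stable one, and only then download, verify, and extract the tarball. A minimal sketch of the comparison step; the version strings are illustrative, not taken from the log:

```python
# "Determine if Cilium CLI needs installation or update", sketched: missing
# or older than the stable release means the download tasks should run.
from typing import Optional

def needs_cilium_cli_update(installed: Optional[str], stable: str) -> bool:
    """True when the CLI is absent or its version is older than `stable`."""
    if installed is None:  # "Check if Cilium CLI is installed" found nothing
        return True
    parse = lambda v: tuple(int(p) for p in v.lstrip("v").split("."))
    return parse(installed) < parse(stable)

print(needs_cilium_cli_update(None, "v0.16.11"))        # missing -> True
print(needs_cilium_cli_update("v0.16.11", "v0.16.11"))  # current -> False
```

In this run every Cilium task was skipped on testbed-node-0, so neither branch executed.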
********************************************************** 2025-07-12 13:46:19.071449 | orchestrator | 2025-07-12 13:46:19.071459 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-07-12 13:46:19.071469 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:00.879) 0:03:13.194 ********* 2025-07-12 13:46:19.071479 | orchestrator | ok: [testbed-manager] 2025-07-12 13:46:19.071489 | orchestrator | 2025-07-12 13:46:19.071497 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-07-12 13:46:19.071504 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:00.271) 0:03:13.465 ********* 2025-07-12 13:46:19.071513 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 13:46:19.071521 | orchestrator | 2025-07-12 13:46:19.071529 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-07-12 13:46:19.071537 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:00.206) 0:03:13.672 ********* 2025-07-12 13:46:19.071545 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:19.071553 | orchestrator | 2025-07-12 13:46:19.071561 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-07-12 13:46:19.071568 | orchestrator | 2025-07-12 13:46:19.071577 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-07-12 13:46:19.071589 | orchestrator | Saturday 12 July 2025 13:46:01 +0000 (0:00:05.610) 0:03:19.282 ********* 2025-07-12 13:46:19.071598 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:19.071606 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:19.071614 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:19.071634 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:19.071642 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:19.071650 | 
orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:19.071657 | orchestrator | 2025-07-12 13:46:19.071666 | orchestrator | TASK [Manage labels] *********************************************************** 2025-07-12 13:46:19.071673 | orchestrator | Saturday 12 July 2025 13:46:01 +0000 (0:00:00.600) 0:03:19.883 ********* 2025-07-12 13:46:19.071681 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-07-12 13:46:19.071689 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-07-12 13:46:19.071697 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-07-12 13:46:19.071705 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-07-12 13:46:19.071713 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-07-12 13:46:19.071721 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-07-12 13:46:19.071729 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-07-12 13:46:19.071736 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-07-12 13:46:19.071744 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-07-12 13:46:19.071752 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-07-12 13:46:19.071760 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-07-12 13:46:19.071768 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-07-12 13:46:19.071776 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-12 13:46:19.071784 | orchestrator | 
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-07-12 13:46:19.071792 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-07-12 13:46:19.071800 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-07-12 13:46:19.071808 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-12 13:46:19.071816 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-12 13:46:19.071823 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-07-12 13:46:19.071831 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-12 13:46:19.071839 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-12 13:46:19.071847 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-07-12 13:46:19.071855 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-12 13:46:19.071863 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-12 13:46:19.071874 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-07-12 13:46:19.071882 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-12 13:46:19.071890 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-12 13:46:19.071898 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-07-12 13:46:19.071906 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-12 13:46:19.071913 | orchestrator | ok: [testbed-node-2 
-> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-07-12 13:46:19.071921 | orchestrator | 2025-07-12 13:46:19.071969 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-07-12 13:46:19.071979 | orchestrator | Saturday 12 July 2025 13:46:15 +0000 (0:00:13.326) 0:03:33.210 ********* 2025-07-12 13:46:19.071987 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:19.071995 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:19.072003 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:19.072010 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.072018 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.072026 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.072034 | orchestrator | 2025-07-12 13:46:19.072042 | orchestrator | TASK [Manage taints] *********************************************************** 2025-07-12 13:46:19.072050 | orchestrator | Saturday 12 July 2025 13:46:15 +0000 (0:00:00.474) 0:03:33.685 ********* 2025-07-12 13:46:19.072058 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:19.072066 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:19.072073 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:19.072081 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:19.072089 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:19.072097 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:19.072105 | orchestrator | 2025-07-12 13:46:19.072112 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:46:19.072126 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:19.072136 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-07-12 13:46:19.072144 | orchestrator | testbed-node-1 : ok=39  changed=17 
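The "Merge labels, annotations, and taints" task above combines defaults with host-specific settings before the "Manage labels" loop applies each entry via kubectl. A plausible sketch of the merge for labels; the group/host split is an assumption, while the label keys and values come from the log:

```python
# Group-level defaults for a control-plane node, as seen in the loop items.
group_labels = {
    "node-role.osism.tech/control-plane": "true",
    "openstack-control-plane": "enabled",
}
# Host-specific additions; host entries win over group defaults on conflict.
host_labels = {
    "node-role.osism.tech/rook-mon": "true",
}

merged = {**group_labels, **host_labels}

# Each merged entry becomes one loop item, roughly equivalent to:
# kubectl label node testbed-node-0 <key>=<value> --overwrite
for key, value in sorted(merged.items()):
    print(f"{key}={value}")
```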
 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-12 13:46:19.072152 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-12 13:46:19.072160 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 13:46:19.072168 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 13:46:19.072176 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 13:46:19.072184 | orchestrator | 2025-07-12 13:46:19.072192 | orchestrator | 2025-07-12 13:46:19.072200 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:46:19.072208 | orchestrator | Saturday 12 July 2025 13:46:16 +0000 (0:00:00.473) 0:03:34.159 ********* 2025-07-12 13:46:19.072216 | orchestrator | =============================================================================== 2025-07-12 13:46:19.072223 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.88s 2025-07-12 13:46:19.072231 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 21.96s 2025-07-12 13:46:19.072239 | orchestrator | Manage labels ---------------------------------------------------------- 13.33s 2025-07-12 13:46:19.072247 | orchestrator | kubectl : Install required packages ------------------------------------ 12.15s 2025-07-12 13:46:19.072255 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.15s 2025-07-12 13:46:19.072263 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.58s 2025-07-12 13:46:19.072271 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.74s 2025-07-12 13:46:19.072279 | orchestrator | k9s : Install k9s 
packages ---------------------------------------------- 5.61s 2025-07-12 13:46:19.072286 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.17s 2025-07-12 13:46:19.072299 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.28s 2025-07-12 13:46:19.072307 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.25s 2025-07-12 13:46:19.072315 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.89s 2025-07-12 13:46:19.072323 | orchestrator | k3s_server : Kill the temporary service used for initialization --------- 1.88s 2025-07-12 13:46:19.072330 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.83s 2025-07-12 13:46:19.072338 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.76s 2025-07-12 13:46:19.072350 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.72s 2025-07-12 13:46:19.072358 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.64s 2025-07-12 13:46:19.072366 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.55s 2025-07-12 13:46:19.072373 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s 2025-07-12 13:46:19.072381 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.49s 2025-07-12 13:46:19.072389 | orchestrator | 2025-07-12 13:46:19 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:19.072397 | orchestrator | 2025-07-12 13:46:19 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:19.072505 | orchestrator | 2025-07-12 13:46:19 | INFO  | Task 
92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:19.078680 | orchestrator | 2025-07-12 13:46:19 | INFO  | Task 06cd14b6-55f6-4cc1-ac46-d6ccec9998f5 is in state STARTED 2025-07-12 13:46:19.083431 | orchestrator | 2025-07-12 13:46:19 | INFO  | Task 041d1c0e-0750-484f-82ba-bb7beb331171 is in state STARTED 2025-07-12 13:46:19.083459 | orchestrator | 2025-07-12 13:46:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:22.126334 | orchestrator | 2025-07-12 13:46:22 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:22.126740 | orchestrator | 2025-07-12 13:46:22 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:22.129345 | orchestrator | 2025-07-12 13:46:22 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:22.130398 | orchestrator | 2025-07-12 13:46:22 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:22.132145 | orchestrator | 2025-07-12 13:46:22 | INFO  | Task 06cd14b6-55f6-4cc1-ac46-d6ccec9998f5 is in state STARTED 2025-07-12 13:46:22.134444 | orchestrator | 2025-07-12 13:46:22 | INFO  | Task 041d1c0e-0750-484f-82ba-bb7beb331171 is in state STARTED 2025-07-12 13:46:22.134455 | orchestrator | 2025-07-12 13:46:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:25.171222 | orchestrator | 2025-07-12 13:46:25 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:25.172324 | orchestrator | 2025-07-12 13:46:25 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:25.173087 | orchestrator | 2025-07-12 13:46:25 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:25.174596 | orchestrator | 2025-07-12 13:46:25 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:25.175396 | orchestrator | 2025-07-12 13:46:25 | INFO  | Task 
06cd14b6-55f6-4cc1-ac46-d6ccec9998f5 is in state STARTED 2025-07-12 13:46:25.176986 | orchestrator | 2025-07-12 13:46:25 | INFO  | Task 041d1c0e-0750-484f-82ba-bb7beb331171 is in state SUCCESS 2025-07-12 13:46:25.177010 | orchestrator | 2025-07-12 13:46:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:28.236713 | orchestrator | 2025-07-12 13:46:28 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:28.237976 | orchestrator | 2025-07-12 13:46:28 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:28.238640 | orchestrator | 2025-07-12 13:46:28 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:28.239384 | orchestrator | 2025-07-12 13:46:28 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:28.240678 | orchestrator | 2025-07-12 13:46:28 | INFO  | Task 06cd14b6-55f6-4cc1-ac46-d6ccec9998f5 is in state SUCCESS 2025-07-12 13:46:28.240708 | orchestrator | 2025-07-12 13:46:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:31.279844 | orchestrator | 2025-07-12 13:46:31 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:31.280070 | orchestrator | 2025-07-12 13:46:31 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:31.280087 | orchestrator | 2025-07-12 13:46:31 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:31.280680 | orchestrator | 2025-07-12 13:46:31 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:31.280703 | orchestrator | 2025-07-12 13:46:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:34.320586 | orchestrator | 2025-07-12 13:46:34 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:34.322069 | orchestrator | 2025-07-12 13:46:34 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:34.324072 | orchestrator | 2025-07-12 13:46:34 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:34.324728 | orchestrator | 2025-07-12 13:46:34 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:34.324759 | orchestrator | 2025-07-12 13:46:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:37.380211 | orchestrator | 2025-07-12 13:46:37 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:37.380293 | orchestrator | 2025-07-12 13:46:37 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:37.380306 | orchestrator | 2025-07-12 13:46:37 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:37.384172 | orchestrator | 2025-07-12 13:46:37 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:37.384201 | orchestrator | 2025-07-12 13:46:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:40.420083 | orchestrator | 2025-07-12 13:46:40 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:40.422897 | orchestrator | 2025-07-12 13:46:40 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:40.425075 | orchestrator | 2025-07-12 13:46:40 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:40.426399 | orchestrator | 2025-07-12 13:46:40 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:40.426604 | orchestrator | 2025-07-12 13:46:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:43.463433 | orchestrator | 2025-07-12 13:46:43 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:43.465193 | orchestrator | 2025-07-12 13:46:43 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:43.466977 | orchestrator | 2025-07-12 13:46:43 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:43.471249 | orchestrator | 2025-07-12 13:46:43 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:43.471335 | orchestrator | 2025-07-12 13:46:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:46.511176 | orchestrator | 2025-07-12 13:46:46 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:46.511803 | orchestrator | 2025-07-12 13:46:46 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:46.512151 | orchestrator | 2025-07-12 13:46:46 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:46.513017 | orchestrator | 2025-07-12 13:46:46 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:46.513040 | orchestrator | 2025-07-12 13:46:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:49.564649 | orchestrator | 2025-07-12 13:46:49 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:49.564737 | orchestrator | 2025-07-12 13:46:49 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:49.566413 | orchestrator | 2025-07-12 13:46:49 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:49.568604 | orchestrator | 2025-07-12 13:46:49 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:49.568620 | orchestrator | 2025-07-12 13:46:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:52.616545 | orchestrator | 2025-07-12 13:46:52 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state STARTED 2025-07-12 13:46:52.617998 | orchestrator | 2025-07-12 13:46:52 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:52.619674 | orchestrator | 2025-07-12 13:46:52 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:52.620955 | orchestrator | 2025-07-12 13:46:52 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:52.620985 | orchestrator | 2025-07-12 13:46:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:55.678245 | orchestrator | 2025-07-12 13:46:55.678435 | orchestrator | 2025-07-12 13:46:55.678455 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-07-12 13:46:55.678469 | orchestrator | 2025-07-12 13:46:55.678481 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-12 13:46:55.678522 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:00.224) 0:00:00.224 ********* 2025-07-12 13:46:55.678535 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-12 13:46:55.678547 | orchestrator | 2025-07-12 13:46:55.678558 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-12 13:46:55.678569 | orchestrator | Saturday 12 July 2025 13:46:21 +0000 (0:00:00.751) 0:00:00.976 ********* 2025-07-12 13:46:55.678581 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:55.678592 | orchestrator | 2025-07-12 13:46:55.678603 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-07-12 13:46:55.678614 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:01.115) 0:00:02.092 ********* 2025-07-12 13:46:55.678625 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:55.678636 | orchestrator | 2025-07-12 13:46:55.678647 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:46:55.678659 | orchestrator | testbed-manager : ok=3  changed=2 
 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:55.678699 | orchestrator | 2025-07-12 13:46:55.678711 | orchestrator | 2025-07-12 13:46:55.678722 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:46:55.678733 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:00.336) 0:00:02.428 ********* 2025-07-12 13:46:55.678744 | orchestrator | =============================================================================== 2025-07-12 13:46:55.678755 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.12s 2025-07-12 13:46:55.678765 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s 2025-07-12 13:46:55.678776 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.34s 2025-07-12 13:46:55.678787 | orchestrator | 2025-07-12 13:46:55.678798 | orchestrator | 2025-07-12 13:46:55.678809 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-07-12 13:46:55.678820 | orchestrator | 2025-07-12 13:46:55.678831 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-07-12 13:46:55.678842 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:00.152) 0:00:00.152 ********* 2025-07-12 13:46:55.678853 | orchestrator | ok: [testbed-manager] 2025-07-12 13:46:55.678866 | orchestrator | 2025-07-12 13:46:55.678877 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-07-12 13:46:55.678913 | orchestrator | Saturday 12 July 2025 13:46:21 +0000 (0:00:00.717) 0:00:00.870 ********* 2025-07-12 13:46:55.678924 | orchestrator | ok: [testbed-manager] 2025-07-12 13:46:55.678935 | orchestrator | 2025-07-12 13:46:55.678946 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-12 13:46:55.678957 | 
orchestrator | Saturday 12 July 2025 13:46:21 +0000 (0:00:00.596) 0:00:01.466 ********* 2025-07-12 13:46:55.678968 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-12 13:46:55.678979 | orchestrator | 2025-07-12 13:46:55.678990 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-12 13:46:55.679000 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:00.731) 0:00:02.198 ********* 2025-07-12 13:46:55.679011 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:55.679022 | orchestrator | 2025-07-12 13:46:55.679033 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-07-12 13:46:55.679044 | orchestrator | Saturday 12 July 2025 13:46:23 +0000 (0:00:00.989) 0:00:03.187 ********* 2025-07-12 13:46:55.679055 | orchestrator | changed: [testbed-manager] 2025-07-12 13:46:55.679065 | orchestrator | 2025-07-12 13:46:55.679076 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-07-12 13:46:55.679087 | orchestrator | Saturday 12 July 2025 13:46:24 +0000 (0:00:00.737) 0:00:03.925 ********* 2025-07-12 13:46:55.679098 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 13:46:55.679109 | orchestrator | 2025-07-12 13:46:55.679120 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-07-12 13:46:55.679131 | orchestrator | Saturday 12 July 2025 13:46:25 +0000 (0:00:01.232) 0:00:05.157 ********* 2025-07-12 13:46:55.679142 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 13:46:55.679153 | orchestrator | 2025-07-12 13:46:55.679164 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-07-12 13:46:55.679174 | orchestrator | Saturday 12 July 2025 13:46:26 +0000 (0:00:00.822) 0:00:05.979 ********* 2025-07-12 13:46:55.679185 | orchestrator | ok: [testbed-manager] 
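The several "Change server address in the kubeconfig" tasks above typically rewrite the `server:` entries so that `kubectl` reaches the Kubernetes API through an address visible from the manager. A minimal stdlib sketch of that kind of rewrite (the helper function and example addresses are hypothetical, not the actual playbook code):

```python
import re

def change_server_address(kubeconfig_text: str, new_address: str) -> str:
    """Rewrite every 'server:' entry in a kubeconfig to point at new_address."""
    # (?m) makes ^/$ match per line; group 1 preserves indentation and the key.
    return re.sub(r"(?m)^(\s*server:\s*).*$", rf"\g<1>{new_address}", kubeconfig_text)

# Hypothetical kubeconfig fragment as fetched from the first control-plane node.
kubeconfig = """\
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: k3s
"""

print(change_server_address(kubeconfig, "https://192.168.16.10:6443"))
```

The same substitution runs once for the copy in the operator's `~/.kube` directory and once for the copy made available inside the manager service, which is why the task appears twice in the recap.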
2025-07-12 13:46:55.679196 | orchestrator | 2025-07-12 13:46:55.679207 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-07-12 13:46:55.679218 | orchestrator | Saturday 12 July 2025 13:46:26 +0000 (0:00:00.357) 0:00:06.337 ********* 2025-07-12 13:46:55.679229 | orchestrator | ok: [testbed-manager] 2025-07-12 13:46:55.679239 | orchestrator | 2025-07-12 13:46:55.679251 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:46:55.679262 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:46:55.679282 | orchestrator | 2025-07-12 13:46:55.679293 | orchestrator | 2025-07-12 13:46:55.679304 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:46:55.679315 | orchestrator | Saturday 12 July 2025 13:46:27 +0000 (0:00:00.275) 0:00:06.612 ********* 2025-07-12 13:46:55.679325 | orchestrator | =============================================================================== 2025-07-12 13:46:55.679336 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.23s 2025-07-12 13:46:55.679347 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.99s 2025-07-12 13:46:55.679358 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.82s 2025-07-12 13:46:55.679399 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.74s 2025-07-12 13:46:55.679411 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s 2025-07-12 13:46:55.679422 | orchestrator | Get home directory of operator user ------------------------------------- 0.72s 2025-07-12 13:46:55.679438 | orchestrator | Create .kube directory -------------------------------------------------- 0.60s 2025-07-12 
13:46:55.679450 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.36s 2025-07-12 13:46:55.679461 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s 2025-07-12 13:46:55.679472 | orchestrator | 2025-07-12 13:46:55.679483 | orchestrator | 2025-07-12 13:46:55 | INFO  | Task dee37826-fa8f-46da-ad2a-afd552f0136e is in state SUCCESS 2025-07-12 13:46:55.679494 | orchestrator | 2025-07-12 13:46:55.679505 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:46:55.679516 | orchestrator | 2025-07-12 13:46:55.679527 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:46:55.679538 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.475) 0:00:00.475 ********* 2025-07-12 13:46:55.679549 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:55.679560 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:55.679570 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:55.679581 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:55.679592 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:55.679603 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:55.679614 | orchestrator | 2025-07-12 13:46:55.679625 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:46:55.679636 | orchestrator | Saturday 12 July 2025 13:45:45 +0000 (0:00:01.066) 0:00:01.542 ********* 2025-07-12 13:46:55.679647 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:55.679658 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:55.679668 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:55.679679 | orchestrator | ok: [testbed-node-0] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:55.679690 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:55.679701 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 13:46:55.679712 | orchestrator | 2025-07-12 13:46:55.679723 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-07-12 13:46:55.679734 | orchestrator | 2025-07-12 13:46:55.679744 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-07-12 13:46:55.679755 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.909) 0:00:02.452 ********* 2025-07-12 13:46:55.679767 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:46:55.679780 | orchestrator | 2025-07-12 13:46:55.679791 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-12 13:46:55.679802 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:01.539) 0:00:03.992 ********* 2025-07-12 13:46:55.679821 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-12 13:46:55.679832 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-12 13:46:55.679843 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-12 13:46:55.679854 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-12 13:46:55.679865 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-12 13:46:55.679875 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-12 13:46:55.679916 | orchestrator | 2025-07-12 13:46:55.679927 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-12 13:46:55.679938 | orchestrator | 
Saturday 12 July 2025 13:45:49 +0000 (0:00:01.243) 0:00:05.235 ********* 2025-07-12 13:46:55.679949 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-12 13:46:55.679960 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-12 13:46:55.679971 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-12 13:46:55.679981 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-12 13:46:55.679992 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-12 13:46:55.680003 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-12 13:46:55.680014 | orchestrator | 2025-07-12 13:46:55.680025 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-12 13:46:55.680036 | orchestrator | Saturday 12 July 2025 13:45:51 +0000 (0:00:01.560) 0:00:06.796 ********* 2025-07-12 13:46:55.680047 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-07-12 13:46:55.680058 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-07-12 13:46:55.680069 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:55.680079 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-07-12 13:46:55.680090 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:55.680101 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-07-12 13:46:55.680112 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:55.680123 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-07-12 13:46:55.680134 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:55.680145 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:55.680155 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-07-12 13:46:55.680166 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:55.680177 | orchestrator | 2025-07-12 13:46:55.680188 | orchestrator | TASK 
[openvswitch : Create /run/openvswitch directory on host] ***************** 2025-07-12 13:46:55.680199 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:01.405) 0:00:08.202 ********* 2025-07-12 13:46:55.680217 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:55.680228 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:55.680239 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:55.680250 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:55.680261 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:55.680272 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:55.680283 | orchestrator | 2025-07-12 13:46:55.680299 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-07-12 13:46:55.680310 | orchestrator | Saturday 12 July 2025 13:45:53 +0000 (0:00:00.730) 0:00:08.933 ********* 2025-07-12 13:46:55.680366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680433 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680505 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680565 | orchestrator | 2025-07-12 13:46:55.680577 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 
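The long per-item output above comes from Kolla-Ansible looping over a dict of service definitions and ensuring one config directory per enabled service on each host. A minimal Python sketch of that iteration pattern (the service names, images, and healthcheck commands are copied from the log; the helper function itself is hypothetical):

```python
# Trimmed-down service definitions, mirroring the items shown in the log.
services = {
    "openvswitch-db-server": {
        "container_name": "openvswitch_db",
        "image": "registry.osism.tech/kolla/openvswitch-db-server:2024.2",
        "enabled": True,
        "group": "openvswitch",
        "healthcheck": {"test": ["CMD-SHELL", "ovsdb-client list-dbs"],
                        "interval": "30", "retries": "3"},
    },
    "openvswitch-vswitchd": {
        "container_name": "openvswitch_vswitchd",
        "image": "registry.osism.tech/kolla/openvswitch-vswitchd:2024.2",
        "enabled": True,
        "group": "openvswitch",
        "privileged": True,
        "healthcheck": {"test": ["CMD-SHELL", "ovs-appctl version"],
                        "interval": "30", "retries": "3"},
    },
}

def config_dirs(services: dict) -> list:
    """Return the per-service config directories an 'Ensuring config
    directories exist' style task would create for enabled services."""
    return [f"/etc/kolla/{name}" for name, svc in services.items()
            if svc.get("enabled")]

print(config_dirs(services))
```

The subsequent "Copying over config.json files for services" task then renders a `config.json` into each of these directories, which is why both tasks report `changed` once per service and per node.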
2025-07-12 13:46:55.680596 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:02.246) 0:00:11.180 ********* 2025-07-12 13:46:55.680608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680679 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680709 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680794 | orchestrator | 2025-07-12 13:46:55.680805 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-07-12 13:46:55.680816 | orchestrator | Saturday 12 July 2025 13:45:59 +0000 (0:00:03.605) 0:00:14.786 ********* 2025-07-12 13:46:55.680828 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:55.680839 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:55.680850 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:55.680860 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:46:55.680871 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:46:55.680900 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:46:55.680911 | orchestrator | 2025-07-12 13:46:55.680922 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-07-12 13:46:55.680933 | orchestrator | Saturday 12 July 2025 13:46:00 +0000 (0:00:01.434) 0:00:16.221 ********* 2025-07-12 
13:46:55.680944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.680992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.681012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.681024 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.681035 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.681046 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.681058 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.681096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.681109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 13:46:55.681121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 13:46:55.681132 | orchestrator | 2025-07-12 13:46:55.681143 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:55.681154 | orchestrator | Saturday 12 July 2025 13:46:05 +0000 (0:00:04.738) 0:00:20.959 ********* 2025-07-12 13:46:55.681165 | orchestrator | 2025-07-12 13:46:55.681176 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:55.681187 | orchestrator | Saturday 12 July 2025 13:46:05 +0000 (0:00:00.566) 0:00:21.526 ********* 2025-07-12 13:46:55.681198 | orchestrator | 2025-07-12 13:46:55.681209 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:55.681220 | orchestrator | Saturday 12 July 2025 13:46:06 +0000 (0:00:00.301) 0:00:21.827 ********* 2025-07-12 13:46:55.681231 | orchestrator | 2025-07-12 13:46:55.681242 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:55.681252 | orchestrator | Saturday 12 July 2025 13:46:06 +0000 (0:00:00.255) 0:00:22.083 ********* 2025-07-12 
13:46:55.681263 | orchestrator | 2025-07-12 13:46:55.681274 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:55.681285 | orchestrator | Saturday 12 July 2025 13:46:06 +0000 (0:00:00.322) 0:00:22.406 ********* 2025-07-12 13:46:55.681296 | orchestrator | 2025-07-12 13:46:55.681307 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 13:46:55.681318 | orchestrator | Saturday 12 July 2025 13:46:07 +0000 (0:00:00.396) 0:00:22.802 ********* 2025-07-12 13:46:55.681328 | orchestrator | 2025-07-12 13:46:55.681339 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-07-12 13:46:55.681350 | orchestrator | Saturday 12 July 2025 13:46:07 +0000 (0:00:00.616) 0:00:23.419 ********* 2025-07-12 13:46:55.681361 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:55.681372 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:55.681390 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:55.681401 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:55.681412 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:55.681422 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:55.681433 | orchestrator | 2025-07-12 13:46:55.681444 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-07-12 13:46:55.681455 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:12.453) 0:00:35.873 ********* 2025-07-12 13:46:55.681466 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:46:55.681476 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:46:55.681487 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:46:55.681498 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:46:55.681509 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:46:55.681520 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:46:55.681530 | orchestrator | 2025-07-12 
13:46:55.681541 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-12 13:46:55.681552 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:02.056) 0:00:37.929 ********* 2025-07-12 13:46:55.681563 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:55.681574 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:55.681585 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:55.681596 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:55.681606 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:55.681617 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:55.681628 | orchestrator | 2025-07-12 13:46:55.681639 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-07-12 13:46:55.681649 | orchestrator | Saturday 12 July 2025 13:46:31 +0000 (0:00:09.549) 0:00:47.479 ********* 2025-07-12 13:46:55.681667 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-07-12 13:46:55.681679 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-07-12 13:46:55.681695 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-07-12 13:46:55.681706 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-07-12 13:46:55.681717 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-07-12 13:46:55.681728 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-07-12 13:46:55.681738 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 
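The "Set system-id, hostname and hw-offload" items above are applied per node against the Open_vSwitch table. As a rough sketch of what each item amounts to (assuming a hypothetical `render_ovs_vsctl` helper and plain `ovs-vsctl` syntax — this is not kolla-ansible's actual implementation, which uses the `openvswitch_db` Ansible module), the item dicts from the log map onto commands like this:

```python
# Hypothetical sketch: render openvswitch_db-style items (as logged above)
# as ovs-vsctl commands. Command syntax is an assumption for illustration.

def render_ovs_vsctl(item):
    """Render one item dict as an ovs-vsctl command string."""
    col, name = item["col"], item["name"]
    if item.get("state") == "absent":
        # hw-offload was reported "ok ... 'state': 'absent'" in the log:
        # the key is removed from the map column rather than set.
        return f"ovs-vsctl remove Open_vSwitch . {col} {name}"
    return f"ovs-vsctl set Open_vSwitch . {col}:{name}={item['value']}"

# Items for one node, copied from the log output above.
items = [
    {"col": "external_ids", "name": "system-id", "value": "testbed-node-0"},
    {"col": "external_ids", "name": "hostname", "value": "testbed-node-0"},
    {"col": "other_config", "name": "hw-offload", "value": True, "state": "absent"},
]

for item in items:
    print(render_ovs_vsctl(item))
# → ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0
# → ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-0
# → ovs-vsctl remove Open_vSwitch . other_config hw-offload
```

This also explains why the two `external_ids` items report `changed` on every node while the `hw-offload` item reports `ok`: the keys being set are new, whereas the absent key was already missing.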
2025-07-12 13:46:55.681749 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-07-12 13:46:55.681760 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-07-12 13:46:55.681771 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-07-12 13:46:55.681781 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-07-12 13:46:55.681792 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-07-12 13:46:55.681803 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:55.681814 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:55.681824 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:55.681835 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:55.681854 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:55.681864 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 13:46:55.681875 | orchestrator | 2025-07-12 13:46:55.681903 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-07-12 13:46:55.681914 | orchestrator | Saturday 12 July 2025 13:46:39 +0000 (0:00:07.956) 0:00:55.435 ********* 2025-07-12 13:46:55.681925 | 
orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-07-12 13:46:55.681936 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:55.681947 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-07-12 13:46:55.681958 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:55.681969 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-07-12 13:46:55.681979 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:55.681990 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-07-12 13:46:55.682001 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-07-12 13:46:55.682012 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-07-12 13:46:55.682111 | orchestrator | 2025-07-12 13:46:55.682123 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-07-12 13:46:55.682135 | orchestrator | Saturday 12 July 2025 13:46:42 +0000 (0:00:02.690) 0:00:58.125 ********* 2025-07-12 13:46:55.682146 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-07-12 13:46:55.682157 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:46:55.682168 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-07-12 13:46:55.682179 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:46:55.682190 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-07-12 13:46:55.682200 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:46:55.682211 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-07-12 13:46:55.682222 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-07-12 13:46:55.682233 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-07-12 13:46:55.682243 | orchestrator | 2025-07-12 13:46:55.682254 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-12 
13:46:55.682265 | orchestrator | Saturday 12 July 2025 13:46:46 +0000 (0:00:03.971) 0:01:02.097 ********* 2025-07-12 13:46:55.682276 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:46:55.682287 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:46:55.682298 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:46:55.682308 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:46:55.682319 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:46:55.682330 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:46:55.682341 | orchestrator | 2025-07-12 13:46:55.682351 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:46:55.682362 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:46:55.682384 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:46:55.682401 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:46:55.682412 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 13:46:55.682424 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 13:46:55.682434 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 13:46:55.682453 | orchestrator | 2025-07-12 13:46:55.682465 | orchestrator | 2025-07-12 13:46:55.682475 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:46:55.682486 | orchestrator | Saturday 12 July 2025 13:46:54 +0000 (0:00:08.029) 0:01:10.126 ********* 2025-07-12 13:46:55.682497 | orchestrator | =============================================================================== 2025-07-12 13:46:55.682508 | orchestrator | 
openvswitch : Restart openvswitch-vswitchd container ------------------- 17.58s 2025-07-12 13:46:55.682519 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.45s 2025-07-12 13:46:55.682530 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.96s 2025-07-12 13:46:55.682541 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.74s 2025-07-12 13:46:55.682551 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.97s 2025-07-12 13:46:55.682562 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.61s 2025-07-12 13:46:55.682573 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.69s 2025-07-12 13:46:55.682584 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.46s 2025-07-12 13:46:55.682594 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.25s 2025-07-12 13:46:55.682605 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.06s 2025-07-12 13:46:55.682616 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.56s 2025-07-12 13:46:55.682627 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.54s 2025-07-12 13:46:55.682638 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.43s 2025-07-12 13:46:55.682648 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.41s 2025-07-12 13:46:55.682659 | orchestrator | module-load : Load modules ---------------------------------------------- 1.24s 2025-07-12 13:46:55.682670 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.07s 2025-07-12 13:46:55.682681 | orchestrator | Group hosts 
based on enabled services ----------------------------------- 0.91s 2025-07-12 13:46:55.682692 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.73s 2025-07-12 13:46:55.682702 | orchestrator | 2025-07-12 13:46:55 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:55.682713 | orchestrator | 2025-07-12 13:46:55 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:55.682724 | orchestrator | 2025-07-12 13:46:55 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:55.682740 | orchestrator | 2025-07-12 13:46:55 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:46:58.719442 | orchestrator | 2025-07-12 13:46:58 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:46:58.723202 | orchestrator | 2025-07-12 13:46:58 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:46:58.723719 | orchestrator | 2025-07-12 13:46:58 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:46:58.724594 | orchestrator | 2025-07-12 13:46:58 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:46:58.724621 | orchestrator | 2025-07-12 13:46:58 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:47:01.761239 | orchestrator | 2025-07-12 13:47:01 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:47:01.761933 | orchestrator | 2025-07-12 13:47:01 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:47:01.762622 | orchestrator | 2025-07-12 13:47:01 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:47:01.763742 | orchestrator | 2025-07-12 13:47:01 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:47:01.763764 | orchestrator | 2025-07-12 13:47:01 | INFO  | Wait 1 
second(s) until the next check 2025-07-12 13:47:50.504278 | orchestrator | 2025-07-12 13:47:50 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:47:50.506256 | orchestrator | 2025-07-12 13:47:50 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:47:50.507928 | orchestrator | 2025-07-12 13:47:50 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:47:50.510192 | orchestrator | 2025-07-12 13:47:50 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:47:50.511103 | orchestrator | 2025-07-12 13:47:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:47:53.554461 | orchestrator | 2025-07-12 13:47:53 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:47:53.556442 | orchestrator | 2025-07-12 13:47:53 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:47:53.559131 | orchestrator | 2025-07-12 13:47:53 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:47:53.561327 | orchestrator | 2025-07-12 13:47:53 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:47:53.561903 | orchestrator | 2025-07-12 13:47:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:47:56.599108 | orchestrator | 2025-07-12 13:47:56 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:47:56.600898 | orchestrator | 2025-07-12 13:47:56 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:47:56.602303 | orchestrator | 2025-07-12 13:47:56 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:47:56.603510 | orchestrator | 2025-07-12 13:47:56 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:47:56.603756 | orchestrator | 2025-07-12 13:47:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:47:59.664279 | orchestrator | 2025-07-12 13:47:59 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:47:59.666148 | orchestrator | 2025-07-12 13:47:59 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:47:59.668301 | orchestrator | 2025-07-12 13:47:59 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:47:59.670333 | orchestrator | 2025-07-12 13:47:59 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:47:59.670357 | orchestrator | 2025-07-12 13:47:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:02.730248 | orchestrator | 2025-07-12 13:48:02 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:48:02.731865 | orchestrator | 2025-07-12 13:48:02 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:48:02.733242 | orchestrator | 2025-07-12 13:48:02 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:48:02.735084 | orchestrator | 2025-07-12 13:48:02 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:48:02.735176 | orchestrator | 2025-07-12 13:48:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:05.782920 | orchestrator | 2025-07-12 13:48:05 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:48:05.784197 | orchestrator | 2025-07-12 13:48:05 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:48:05.785951 | orchestrator | 2025-07-12 13:48:05 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:48:05.788430 | orchestrator | 2025-07-12 13:48:05 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:48:05.788456 | orchestrator | 2025-07-12 13:48:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:08.840010 | orchestrator | 2025-07-12 13:48:08 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:48:08.843290 | orchestrator | 2025-07-12 13:48:08 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:48:08.844360 | orchestrator | 2025-07-12 13:48:08 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:48:08.845802 | orchestrator | 2025-07-12 13:48:08 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:48:08.846148 | orchestrator | 2025-07-12 13:48:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:11.897870 | orchestrator | 2025-07-12 13:48:11 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:48:11.900471 | orchestrator | 2025-07-12 13:48:11 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:48:11.903520 | orchestrator | 2025-07-12 13:48:11 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:48:11.905847 | orchestrator | 2025-07-12 13:48:11 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:48:11.905875 | orchestrator | 2025-07-12 13:48:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:14.948401 | orchestrator | 2025-07-12 13:48:14 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:48:14.950409 | orchestrator | 2025-07-12 13:48:14 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:48:14.951635 | orchestrator | 2025-07-12 13:48:14 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:48:14.955353 | orchestrator | 2025-07-12 13:48:14 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:48:14.955379 | orchestrator | 2025-07-12 13:48:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:17.998270 | orchestrator | 2025-07-12 13:48:17 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:48:17.998408 | orchestrator | 2025-07-12 13:48:17 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:48:17.998489 | orchestrator | 2025-07-12 13:48:17 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:48:17.999516 | orchestrator | 2025-07-12 13:48:17 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:48:17.999571 | orchestrator | 2025-07-12 13:48:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:21.033303 | orchestrator | 2025-07-12 13:48:21 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:48:21.033789 | orchestrator | 2025-07-12 13:48:21 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:48:21.034634 | orchestrator | 2025-07-12 13:48:21 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:48:21.035447 | orchestrator | 2025-07-12 13:48:21 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:48:21.035470 | orchestrator | 2025-07-12 13:48:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:24.076100 | orchestrator | 2025-07-12 13:48:24 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:48:24.077907 | orchestrator | 2025-07-12 13:48:24 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED 2025-07-12 13:48:24.080566 | orchestrator | 2025-07-12 13:48:24 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:48:24.081850 | orchestrator | 2025-07-12 13:48:24 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:48:24.082098 | orchestrator | 2025-07-12 13:48:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:27.119851 | orchestrator | 2025-07-12 13:48:27 | INFO  | Task 
a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:48:27.120017 | orchestrator | 2025-07-12 13:48:27 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state STARTED
2025-07-12 13:48:27.120837 | orchestrator | 2025-07-12 13:48:27 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED
2025-07-12 13:48:27.125692 | orchestrator | 2025-07-12 13:48:27 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED
2025-07-12 13:48:27.125799 | orchestrator | 2025-07-12 13:48:27 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:48:30.157364 | orchestrator | 2025-07-12 13:48:30 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:48:30.158069 | orchestrator | 2025-07-12 13:48:30 | INFO  | Task 9f3ceac0-1d5f-438f-b58a-5c208d6699b1 is in state SUCCESS
2025-07-12 13:48:30.159372 | orchestrator |
2025-07-12 13:48:30.159415 | orchestrator |
2025-07-12 13:48:30.159427 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-07-12 13:48:30.159440 | orchestrator |
2025-07-12 13:48:30.159451 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-07-12 13:48:30.159463 | orchestrator | Saturday 12 July 2025 13:46:09 +0000 (0:00:00.269) 0:00:00.269 *********
2025-07-12 13:48:30.159474 | orchestrator | ok: [localhost] => {
2025-07-12 13:48:30.159487 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-07-12 13:48:30.159499 | orchestrator | }
2025-07-12 13:48:30.159510 | orchestrator |
2025-07-12 13:48:30.159521 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-07-12 13:48:30.159532 | orchestrator | Saturday 12 July 2025 13:46:09 +0000 (0:00:00.165) 0:00:00.435 *********
2025-07-12 13:48:30.159544 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-07-12 13:48:30.159557 | orchestrator | ...ignoring
2025-07-12 13:48:30.159568 | orchestrator |
2025-07-12 13:48:30.159579 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-07-12 13:48:30.159590 | orchestrator | Saturday 12 July 2025 13:46:13 +0000 (0:00:03.619) 0:00:04.054 *********
2025-07-12 13:48:30.159601 | orchestrator | skipping: [localhost]
2025-07-12 13:48:30.159655 | orchestrator |
2025-07-12 13:48:30.159669 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-07-12 13:48:30.159724 | orchestrator | Saturday 12 July 2025 13:46:13 +0000 (0:00:00.102) 0:00:04.156 *********
2025-07-12 13:48:30.159736 | orchestrator | ok: [localhost]
2025-07-12 13:48:30.159774 | orchestrator |
2025-07-12 13:48:30.159803 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 13:48:30.159814 | orchestrator |
2025-07-12 13:48:30.159825 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 13:48:30.159835 | orchestrator | Saturday 12 July 2025 13:46:13 +0000 (0:00:00.171) 0:00:04.328 *********
2025-07-12 13:48:30.159846 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:48:30.159857 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:48:30.159867 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:48:30.159878 | orchestrator |
2025-07-12 13:48:30.159889 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 13:48:30.159899 | orchestrator | Saturday 12 July 2025 13:46:13 +0000 (0:00:00.433) 0:00:04.762 *********
2025-07-12 13:48:30.159910 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-07-12 13:48:30.159955 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-07-12 13:48:30.159968 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-07-12 13:48:30.159980 | orchestrator |
2025-07-12 13:48:30.159992 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-07-12 13:48:30.160004 | orchestrator |
2025-07-12 13:48:30.160017 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-07-12 13:48:30.160030 | orchestrator | Saturday 12 July 2025 13:46:14 +0000 (0:00:00.612) 0:00:05.374 *********
2025-07-12 13:48:30.160042 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:48:30.160054 | orchestrator |
2025-07-12 13:48:30.160066 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-07-12 13:48:30.160079 | orchestrator | Saturday 12 July 2025 13:46:15 +0000 (0:00:00.740) 0:00:06.115 *********
2025-07-12 13:48:30.160115 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:48:30.160128 | orchestrator |
2025-07-12 13:48:30.160140 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-07-12 13:48:30.160152 | orchestrator | Saturday 12 July 2025 13:46:16 +0000 (0:00:00.912) 0:00:07.027 *********
2025-07-12 13:48:30.160164 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:48:30.160177 | orchestrator |
2025-07-12 13:48:30.160189 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-07-12 13:48:30.160201 | orchestrator | Saturday 12 July 2025 13:46:16 +0000 (0:00:00.351) 0:00:07.379 *********
2025-07-12 13:48:30.160212 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:48:30.160224 | orchestrator |
2025-07-12 13:48:30.160236 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-07-12 13:48:30.160249 | orchestrator | Saturday 12 July 2025 13:46:16 +0000 (0:00:00.302) 0:00:07.681 *********
2025-07-12 13:48:30.160261 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:48:30.160273 | orchestrator |
2025-07-12 13:48:30.160284 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-07-12 13:48:30.160297 | orchestrator | Saturday 12 July 2025 13:46:17 +0000 (0:00:00.320) 0:00:08.001 *********
2025-07-12 13:48:30.160310 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:48:30.160322 | orchestrator |
2025-07-12 13:48:30.160333 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-07-12 13:48:30.160343 | orchestrator | Saturday 12 July 2025 13:46:17 +0000 (0:00:00.480) 0:00:08.482 *********
2025-07-12 13:48:30.160354 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-2, testbed-node-1
2025-07-12 13:48:30.160365 | orchestrator |
2025-07-12 13:48:30.160376 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-07-12 13:48:30.160387 | orchestrator | Saturday 12 July 2025 13:46:19 +0000 (0:00:01.624) 0:00:10.106 *********
2025-07-12 13:48:30.160397 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:48:30.160408 | orchestrator |
2025-07-12 13:48:30.160419 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-07-12 13:48:30.160430 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:01.504) 0:00:11.610 *********
2025-07-12 13:48:30.160440 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:48:30.160451 | orchestrator |
2025-07-12 13:48:30.160462 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-07-12 13:48:30.160473 | orchestrator | Saturday 12 July 2025 13:46:21 +0000 (0:00:00.639) 0:00:12.249 *********
2025-07-12 13:48:30.160483 | orchestrator |
skipping: [testbed-node-0]
2025-07-12 13:48:30.160494 | orchestrator |
2025-07-12 13:48:30.160516 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-07-12 13:48:30.160528 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:00.977) 0:00:13.226 *********
2025-07-12 13:48:30.160545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 13:48:30.160569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 13:48:30.160591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 13:48:30.160604 | orchestrator |
2025-07-12 13:48:30.160615 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-07-12 13:48:30.160626 | orchestrator | Saturday 12 July 2025 13:46:24 +0000 (0:00:01.792) 0:00:15.019 *********
2025-07-12 13:48:30.160647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 13:48:30.160665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 13:48:30.160685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 13:48:30.160697 | orchestrator |
2025-07-12 13:48:30.160708 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-07-12 13:48:30.160719 | orchestrator | Saturday 12 July 2025 13:46:26 +0000 (0:00:02.598) 0:00:17.617 *********
2025-07-12 13:48:30.160730 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-07-12 13:48:30.160741 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-07-12 13:48:30.160797 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-07-12 13:48:30.160817 | orchestrator |
2025-07-12 13:48:30.160835 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-07-12 13:48:30.160851 | orchestrator | Saturday 12 July 2025 13:46:28 +0000 (0:00:01.728) 0:00:19.346 *********
2025-07-12 13:48:30.160862 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-07-12 13:48:30.160872 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-07-12 13:48:30.160883 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-07-12 13:48:30.160894 | orchestrator |
2025-07-12 13:48:30.160904 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-07-12 13:48:30.160915 | orchestrator | Saturday 12 July 2025 13:46:30 +0000 (0:00:02.064) 0:00:21.410 *********
2025-07-12 13:48:30.160926 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-07-12 13:48:30.160937 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-07-12 13:48:30.160947 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-07-12 13:48:30.160958 | orchestrator |
2025-07-12 13:48:30.160969 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-07-12 13:48:30.160980 | orchestrator | Saturday 12 July 2025 13:46:32 +0000 (0:00:01.560) 0:00:22.971 *********
2025-07-12 13:48:30.160999 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-07-12 13:48:30.161010 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-07-12 13:48:30.161021 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-07-12 13:48:30.161032 | orchestrator |
2025-07-12 13:48:30.161043 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-07-12 13:48:30.161061 | orchestrator | Saturday 12 July 2025 13:46:34 +0000 (0:00:02.328) 0:00:25.299 *********
2025-07-12 13:48:30.161072 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-07-12 13:48:30.161083 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-07-12 13:48:30.161094 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-07-12 13:48:30.161104 | orchestrator |
2025-07-12 13:48:30.161115 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-07-12 13:48:30.161126 | orchestrator | Saturday 12 July 2025 13:46:36 +0000 (0:00:01.890) 0:00:27.190 *********
2025-07-12 13:48:30.161136 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-07-12 13:48:30.161147 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-07-12 13:48:30.161158 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-07-12 13:48:30.161169 | orchestrator |
2025-07-12 13:48:30.161186 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-07-12 13:48:30.161197 | orchestrator | Saturday 12 July 2025 13:46:38 +0000 (0:00:01.883) 0:00:29.073 *********
2025-07-12 13:48:30.161208 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:48:30.161219 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:48:30.161230 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:48:30.161240 | orchestrator |
2025-07-12 13:48:30.161251 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-07-12 13:48:30.161262 | orchestrator | Saturday 12 July 2025 13:46:38 +0000 (0:00:00.400) 0:00:29.473 *********
2025-07-12 13:48:30.161274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 13:48:30.161287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 13:48:30.161315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 13:48:30.161328 | orchestrator |
2025-07-12 13:48:30.161338 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-07-12 13:48:30.161349 | orchestrator | Saturday 12 July 2025 13:46:40 +0000 (0:00:01.683) 0:00:31.157 *********
2025-07-12 13:48:30.161360 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:48:30.161371 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:48:30.161381 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:48:30.161392 | orchestrator |
2025-07-12 13:48:30.161403 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-07-12 13:48:30.161413 |
orchestrator | Saturday 12 July 2025 13:46:41 +0000 (0:00:00.955) 0:00:32.113 ********* 2025-07-12 13:48:30.161424 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:48:30.161439 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:48:30.161451 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:48:30.161461 | orchestrator | 2025-07-12 13:48:30.161472 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-07-12 13:48:30.161483 | orchestrator | Saturday 12 July 2025 13:46:49 +0000 (0:00:08.046) 0:00:40.159 ********* 2025-07-12 13:48:30.161493 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:48:30.161555 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:48:30.161661 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:48:30.161675 | orchestrator | 2025-07-12 13:48:30.161686 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 13:48:30.161696 | orchestrator | 2025-07-12 13:48:30.161707 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 13:48:30.161718 | orchestrator | Saturday 12 July 2025 13:46:49 +0000 (0:00:00.355) 0:00:40.514 ********* 2025-07-12 13:48:30.161729 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:48:30.161739 | orchestrator | 2025-07-12 13:48:30.161786 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 13:48:30.161798 | orchestrator | Saturday 12 July 2025 13:46:50 +0000 (0:00:00.618) 0:00:41.133 ********* 2025-07-12 13:48:30.161809 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:48:30.161820 | orchestrator | 2025-07-12 13:48:30.161831 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 13:48:30.161842 | orchestrator | Saturday 12 July 2025 13:46:50 +0000 (0:00:00.254) 0:00:41.388 ********* 2025-07-12 13:48:30.161853 | orchestrator 
| changed: [testbed-node-0] 2025-07-12 13:48:30.161864 | orchestrator | 2025-07-12 13:48:30.161875 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 13:48:30.161885 | orchestrator | Saturday 12 July 2025 13:46:52 +0000 (0:00:01.860) 0:00:43.249 ********* 2025-07-12 13:48:30.161896 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:48:30.161907 | orchestrator | 2025-07-12 13:48:30.161918 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 13:48:30.161929 | orchestrator | 2025-07-12 13:48:30.161940 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 13:48:30.161959 | orchestrator | Saturday 12 July 2025 13:47:47 +0000 (0:00:55.442) 0:01:38.692 ********* 2025-07-12 13:48:30.161970 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:48:30.161981 | orchestrator | 2025-07-12 13:48:30.161992 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 13:48:30.162003 | orchestrator | Saturday 12 July 2025 13:47:48 +0000 (0:00:00.611) 0:01:39.303 ********* 2025-07-12 13:48:30.162062 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:48:30.162076 | orchestrator | 2025-07-12 13:48:30.162087 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 13:48:30.162098 | orchestrator | Saturday 12 July 2025 13:47:48 +0000 (0:00:00.449) 0:01:39.753 ********* 2025-07-12 13:48:30.162109 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:48:30.162120 | orchestrator | 2025-07-12 13:48:30.162131 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 13:48:30.162141 | orchestrator | Saturday 12 July 2025 13:47:50 +0000 (0:00:01.780) 0:01:41.533 ********* 2025-07-12 13:48:30.162163 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:48:30.162174 
| orchestrator | 2025-07-12 13:48:30.162185 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 13:48:30.162196 | orchestrator | 2025-07-12 13:48:30.162207 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 13:48:30.162218 | orchestrator | Saturday 12 July 2025 13:48:05 +0000 (0:00:14.779) 0:01:56.313 ********* 2025-07-12 13:48:30.162229 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:48:30.162240 | orchestrator | 2025-07-12 13:48:30.162251 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 13:48:30.162262 | orchestrator | Saturday 12 July 2025 13:48:06 +0000 (0:00:00.608) 0:01:56.922 ********* 2025-07-12 13:48:30.162273 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:48:30.162283 | orchestrator | 2025-07-12 13:48:30.162294 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 13:48:30.162305 | orchestrator | Saturday 12 July 2025 13:48:06 +0000 (0:00:00.214) 0:01:57.137 ********* 2025-07-12 13:48:30.162316 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:48:30.162327 | orchestrator | 2025-07-12 13:48:30.162338 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 13:48:30.162357 | orchestrator | Saturday 12 July 2025 13:48:08 +0000 (0:00:01.884) 0:01:59.022 ********* 2025-07-12 13:48:30.162369 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:48:30.162380 | orchestrator | 2025-07-12 13:48:30.162391 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-07-12 13:48:30.162402 | orchestrator | 2025-07-12 13:48:30.162413 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-07-12 13:48:30.162424 | orchestrator | Saturday 12 July 2025 13:48:24 +0000 (0:00:16.134) 
0:02:15.156 ********* 2025-07-12 13:48:30.162434 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:48:30.162445 | orchestrator | 2025-07-12 13:48:30.162456 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-07-12 13:48:30.162467 | orchestrator | Saturday 12 July 2025 13:48:24 +0000 (0:00:00.661) 0:02:15.818 ********* 2025-07-12 13:48:30.162478 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 13:48:30.162489 | orchestrator | enable_outward_rabbitmq_True 2025-07-12 13:48:30.162500 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 13:48:30.162511 | orchestrator | outward_rabbitmq_restart 2025-07-12 13:48:30.162522 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:48:30.162532 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:48:30.162543 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:48:30.162554 | orchestrator | 2025-07-12 13:48:30.162565 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-07-12 13:48:30.162576 | orchestrator | skipping: no hosts matched 2025-07-12 13:48:30.162587 | orchestrator | 2025-07-12 13:48:30.162598 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-07-12 13:48:30.162616 | orchestrator | skipping: no hosts matched 2025-07-12 13:48:30.162627 | orchestrator | 2025-07-12 13:48:30.162638 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-07-12 13:48:30.162655 | orchestrator | skipping: no hosts matched 2025-07-12 13:48:30.162666 | orchestrator | 2025-07-12 13:48:30.162677 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:48:30.162689 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-12 
13:48:30.162701 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 13:48:30.162712 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:48:30.162723 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 13:48:30.162734 | orchestrator | 2025-07-12 13:48:30.162804 | orchestrator | 2025-07-12 13:48:30.162819 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:48:30.162830 | orchestrator | Saturday 12 July 2025 13:48:27 +0000 (0:00:02.742) 0:02:18.560 ********* 2025-07-12 13:48:30.162841 | orchestrator | =============================================================================== 2025-07-12 13:48:30.162852 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 86.36s 2025-07-12 13:48:30.162863 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.05s 2025-07-12 13:48:30.162873 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.53s 2025-07-12 13:48:30.162884 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.62s 2025-07-12 13:48:30.162895 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.74s 2025-07-12 13:48:30.162906 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.60s 2025-07-12 13:48:30.162917 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.33s 2025-07-12 13:48:30.162928 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.06s 2025-07-12 13:48:30.162939 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.89s 2025-07-12 13:48:30.162950 | 
orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.88s 2025-07-12 13:48:30.162960 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.84s 2025-07-12 13:48:30.162971 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.79s 2025-07-12 13:48:30.162982 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.73s 2025-07-12 13:48:30.162993 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.68s 2025-07-12 13:48:30.163004 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.62s 2025-07-12 13:48:30.163015 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.56s 2025-07-12 13:48:30.163026 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.50s 2025-07-12 13:48:30.163036 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 0.98s 2025-07-12 13:48:30.163047 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.96s 2025-07-12 13:48:30.163058 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.92s 2025-07-12 13:48:30.163069 | orchestrator | 2025-07-12 13:48:30 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:48:30.163080 | orchestrator | 2025-07-12 13:48:30 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:48:30.163111 | orchestrator | 2025-07-12 13:48:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:48:33.202850 | orchestrator | 2025-07-12 13:48:33 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:48:33.204082 | orchestrator | 2025-07-12 13:48:33 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 
2025-07-12 13:48:33.205476 | orchestrator | 2025-07-12 13:48:33 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:48:33.205796 | orchestrator | 2025-07-12 13:48:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:49:31.037645 | orchestrator | 2025-07-12 13:49:31 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:49:31.038131 | orchestrator | 2025-07-12 13:49:31 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:49:31.039468 | orchestrator | 2025-07-12 13:49:31 |
INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:49:31.039496 | orchestrator | 2025-07-12 13:49:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:49:34.072472 | orchestrator | 2025-07-12 13:49:34 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:49:34.075303 | orchestrator | 2025-07-12 13:49:34 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state STARTED 2025-07-12 13:49:34.075348 | orchestrator | 2025-07-12 13:49:34 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:49:34.075361 | orchestrator | 2025-07-12 13:49:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:49:37.115616 | orchestrator | 2025-07-12 13:49:37 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:49:37.119196 | orchestrator | 2025-07-12 13:49:37 | INFO  | Task 991a97c3-7338-4619-baa2-2937ad229c54 is in state SUCCESS 2025-07-12 13:49:37.120363 | orchestrator | 2025-07-12 13:49:37.120402 | orchestrator | 2025-07-12 13:49:37.120416 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:49:37.120429 | orchestrator | 2025-07-12 13:49:37.120441 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:49:37.120453 | orchestrator | Saturday 12 July 2025 13:46:59 +0000 (0:00:00.193) 0:00:00.193 ********* 2025-07-12 13:49:37.120465 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:49:37.120478 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:49:37.120489 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:49:37.120501 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.120512 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.120524 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.120588 | orchestrator | 2025-07-12 13:49:37.120601 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-07-12 13:49:37.120612 | orchestrator | Saturday 12 July 2025 13:47:00 +0000 (0:00:00.828) 0:00:01.022 ********* 2025-07-12 13:49:37.120726 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-07-12 13:49:37.120739 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-07-12 13:49:37.120750 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-07-12 13:49:37.120761 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-07-12 13:49:37.120772 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-07-12 13:49:37.120783 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-07-12 13:49:37.120795 | orchestrator | 2025-07-12 13:49:37.120806 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-07-12 13:49:37.120817 | orchestrator | 2025-07-12 13:49:37.120828 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-07-12 13:49:37.120839 | orchestrator | Saturday 12 July 2025 13:47:01 +0000 (0:00:01.115) 0:00:02.138 ********* 2025-07-12 13:49:37.120851 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:49:37.120863 | orchestrator | 2025-07-12 13:49:37.120874 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-07-12 13:49:37.120885 | orchestrator | Saturday 12 July 2025 13:47:02 +0000 (0:00:01.330) 0:00:03.468 ********* 2025-07-12 13:49:37.120898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:37.120913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:37.120924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:37.120936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:37.120982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 13:49:37.120994 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121006 | orchestrator |
2025-07-12 13:49:37.121031 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-07-12 13:49:37.121043 | orchestrator | Saturday 12 July 2025 13:47:04 +0000 (0:00:01.455) 0:00:04.924 *********
2025-07-12 13:49:37.121054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121130 | orchestrator |
2025-07-12 13:49:37.121141 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-07-12 13:49:37.121152 | orchestrator | Saturday 12 July 2025 13:47:05 +0000 (0:00:01.638) 0:00:06.562 *********
2025-07-12 13:49:37.121163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121288 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121348 | orchestrator |
2025-07-12 13:49:37.121359 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-07-12 13:49:37.121370 | orchestrator | Saturday 12 July 2025 13:47:07 +0000 (0:00:01.385) 0:00:07.948 *********
2025-07-12 13:49:37.121382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121464 | orchestrator |
2025-07-12 13:49:37.121483 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-07-12 13:49:37.121494 | orchestrator | Saturday 12 July 2025 13:47:08 +0000 (0:00:01.752) 0:00:09.700 *********
2025-07-12 13:49:37.121506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121540 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.121580 | orchestrator |
2025-07-12 13:49:37.121592 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-07-12 13:49:37.121604 | orchestrator | Saturday 12 July 2025 13:47:10 +0000 (0:00:02.071) 0:00:11.772 *********
2025-07-12 13:49:37.121615 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:49:37.121626 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:49:37.121637 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:49:37.121648 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:49:37.121693 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:49:37.121706 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:49:37.121716 | orchestrator |
2025-07-12 13:49:37.121727 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-07-12 13:49:37.121738 | orchestrator | Saturday 12 July 2025 13:47:13 +0000 (0:00:02.681) 0:00:14.453 *********
2025-07-12 13:49:37.121749 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-07-12 13:49:37.121761 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-07-12 13:49:37.121772 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-07-12 13:49:37.121787 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-07-12 13:49:37.121798 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-07-12 13:49:37.121809 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-07-12 13:49:37.121820 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 13:49:37.121831 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 13:49:37.121848 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 13:49:37.121860 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 13:49:37.121871 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 13:49:37.121882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 13:49:37.121893 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 13:49:37.121905 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 13:49:37.121916 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 13:49:37.121927 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 13:49:37.121938 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 13:49:37.121956 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 13:49:37.121967 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 13:49:37.121979 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 13:49:37.121990 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 13:49:37.122000 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 13:49:37.122011 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 13:49:37.122096 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 13:49:37.122120 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 13:49:37.122132 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 13:49:37.122142 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 13:49:37.122153 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 13:49:37.122164 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 13:49:37.122174 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 13:49:37.122186 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 13:49:37.122196 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 13:49:37.122207 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 13:49:37.122218 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 13:49:37.122229 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 13:49:37.122240 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 13:49:37.122251 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 13:49:37.122261 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 13:49:37.122272 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 13:49:37.122283 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 13:49:37.122294 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 13:49:37.122310 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 13:49:37.122321 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-07-12 13:49:37.122333 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-07-12 13:49:37.122351 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-07-12 13:49:37.122362 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-07-12 13:49:37.122382 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-07-12 13:49:37.122393 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-07-12 13:49:37.122404 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 13:49:37.122415 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 13:49:37.122426 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 13:49:37.122437 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 13:49:37.122448 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 13:49:37.122458 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 13:49:37.122469 | orchestrator |
2025-07-12 13:49:37.122480 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 13:49:37.122491 | orchestrator | Saturday 12 July 2025 13:47:33 +0000 (0:00:19.894) 0:00:34.348 *********
2025-07-12 13:49:37.122502 | orchestrator |
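(Editor's note, outside the log: the "Configure OVN in OVSDB" task above writes per-chassis `external_ids`, including the `ovn-remote` string that points every chassis at the three OVN SB DB hosts on port 6642. A minimal sketch of how such a connection string is assembled from the controller IPs; the helper name is illustrative, not kolla-ansible's actual code.)

```python
def build_ovn_remote(db_hosts, port=6642):
    # Join one tcp:<ip>:<port> endpoint per OVN SB DB host into the
    # comma-separated string stored as the 'ovn-remote' external_id.
    return ",".join(f"tcp:{ip}:{port}" for ip in db_hosts)


if __name__ == "__main__":
    # The three control nodes visible in the log output above.
    controllers = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
    print(build_ovn_remote(controllers))
    # prints: tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

In the real deployment this value is rendered from the Ansible inventory and applied with `ovs-vsctl`; the sketch only reproduces the joining logic visible in the log.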
2025-07-12 13:49:37.122513 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 13:49:37.122524 | orchestrator | Saturday 12 July 2025 13:47:33 +0000 (0:00:00.066) 0:00:34.414 ********* 2025-07-12 13:49:37.122534 | orchestrator | 2025-07-12 13:49:37.122545 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 13:49:37.122556 | orchestrator | Saturday 12 July 2025 13:47:33 +0000 (0:00:00.063) 0:00:34.478 ********* 2025-07-12 13:49:37.122567 | orchestrator | 2025-07-12 13:49:37.122578 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 13:49:37.122588 | orchestrator | Saturday 12 July 2025 13:47:33 +0000 (0:00:00.064) 0:00:34.543 ********* 2025-07-12 13:49:37.122599 | orchestrator | 2025-07-12 13:49:37.122610 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 13:49:37.122621 | orchestrator | Saturday 12 July 2025 13:47:33 +0000 (0:00:00.063) 0:00:34.606 ********* 2025-07-12 13:49:37.122631 | orchestrator | 2025-07-12 13:49:37.122642 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-12 13:49:37.122653 | orchestrator | Saturday 12 July 2025 13:47:33 +0000 (0:00:00.062) 0:00:34.668 ********* 2025-07-12 13:49:37.122697 | orchestrator | 2025-07-12 13:49:37.122708 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-07-12 13:49:37.122719 | orchestrator | Saturday 12 July 2025 13:47:33 +0000 (0:00:00.066) 0:00:34.735 ********* 2025-07-12 13:49:37.122730 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:49:37.122742 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:49:37.122753 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.122764 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.122774 | orchestrator | ok: [testbed-node-4] 
2025-07-12 13:49:37.122785 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.122796 | orchestrator | 2025-07-12 13:49:37.122807 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-07-12 13:49:37.122818 | orchestrator | Saturday 12 July 2025 13:47:35 +0000 (0:00:02.022) 0:00:36.758 ********* 2025-07-12 13:49:37.122829 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:49:37.122840 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:49:37.122851 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:49:37.122862 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:49:37.122873 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:49:37.122884 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:49:37.122894 | orchestrator | 2025-07-12 13:49:37.122905 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-07-12 13:49:37.122924 | orchestrator | 2025-07-12 13:49:37.122935 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-12 13:49:37.122946 | orchestrator | Saturday 12 July 2025 13:48:14 +0000 (0:00:38.939) 0:01:15.698 ********* 2025-07-12 13:49:37.122957 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:49:37.122967 | orchestrator | 2025-07-12 13:49:37.122978 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-12 13:49:37.122989 | orchestrator | Saturday 12 July 2025 13:48:15 +0000 (0:00:00.539) 0:01:16.237 ********* 2025-07-12 13:49:37.123000 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:49:37.123011 | orchestrator | 2025-07-12 13:49:37.123027 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-07-12 
13:49:37.123038 | orchestrator | Saturday 12 July 2025 13:48:16 +0000 (0:00:00.681) 0:01:16.919 ********* 2025-07-12 13:49:37.123049 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.123060 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.123071 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.123082 | orchestrator | 2025-07-12 13:49:37.123093 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-07-12 13:49:37.123104 | orchestrator | Saturday 12 July 2025 13:48:16 +0000 (0:00:00.934) 0:01:17.853 ********* 2025-07-12 13:49:37.123114 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.123125 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.123136 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.123152 | orchestrator | 2025-07-12 13:49:37.123164 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-07-12 13:49:37.123175 | orchestrator | Saturday 12 July 2025 13:48:17 +0000 (0:00:00.522) 0:01:18.376 ********* 2025-07-12 13:49:37.123185 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.123196 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.123207 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.123218 | orchestrator | 2025-07-12 13:49:37.123229 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-07-12 13:49:37.123240 | orchestrator | Saturday 12 July 2025 13:48:17 +0000 (0:00:00.489) 0:01:18.866 ********* 2025-07-12 13:49:37.123251 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.123262 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.123272 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.123283 | orchestrator | 2025-07-12 13:49:37.123294 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-07-12 13:49:37.123305 | orchestrator | Saturday 12 July 2025 13:48:18 +0000 
(0:00:00.633) 0:01:19.500 ********* 2025-07-12 13:49:37.123316 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.123327 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.123338 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.123349 | orchestrator | 2025-07-12 13:49:37.123360 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-07-12 13:49:37.123371 | orchestrator | Saturday 12 July 2025 13:48:18 +0000 (0:00:00.321) 0:01:19.821 ********* 2025-07-12 13:49:37.123382 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.123393 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.123404 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.123414 | orchestrator | 2025-07-12 13:49:37.123425 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-07-12 13:49:37.123436 | orchestrator | Saturday 12 July 2025 13:48:19 +0000 (0:00:00.271) 0:01:20.092 ********* 2025-07-12 13:49:37.123447 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.123458 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.123469 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.123480 | orchestrator | 2025-07-12 13:49:37.123491 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-07-12 13:49:37.123502 | orchestrator | Saturday 12 July 2025 13:48:19 +0000 (0:00:00.318) 0:01:20.411 ********* 2025-07-12 13:49:37.123519 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.123530 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.123541 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.123552 | orchestrator | 2025-07-12 13:49:37.123563 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-07-12 13:49:37.123574 | orchestrator | Saturday 12 July 2025 13:48:20 +0000 (0:00:00.513) 
0:01:20.924 ********* 2025-07-12 13:49:37.123585 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.123595 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.123606 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.123617 | orchestrator | 2025-07-12 13:49:37.123628 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-07-12 13:49:37.123639 | orchestrator | Saturday 12 July 2025 13:48:20 +0000 (0:00:00.387) 0:01:21.311 ********* 2025-07-12 13:49:37.123650 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.123710 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.123724 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.123735 | orchestrator | 2025-07-12 13:49:37.123746 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-07-12 13:49:37.123757 | orchestrator | Saturday 12 July 2025 13:48:20 +0000 (0:00:00.280) 0:01:21.591 ********* 2025-07-12 13:49:37.123767 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.123778 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.123789 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.123800 | orchestrator | 2025-07-12 13:49:37.123811 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-07-12 13:49:37.123822 | orchestrator | Saturday 12 July 2025 13:48:21 +0000 (0:00:00.316) 0:01:21.908 ********* 2025-07-12 13:49:37.123833 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.123843 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.123854 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.123865 | orchestrator | 2025-07-12 13:49:37.123876 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-07-12 13:49:37.123886 | orchestrator | Saturday 12 July 2025 13:48:21 +0000 (0:00:00.542) 
0:01:22.450 ********* 2025-07-12 13:49:37.123897 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.123908 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.123919 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.123930 | orchestrator | 2025-07-12 13:49:37.123940 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-07-12 13:49:37.123951 | orchestrator | Saturday 12 July 2025 13:48:21 +0000 (0:00:00.334) 0:01:22.784 ********* 2025-07-12 13:49:37.123962 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.123973 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.123984 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.123994 | orchestrator | 2025-07-12 13:49:37.124005 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-07-12 13:49:37.124016 | orchestrator | Saturday 12 July 2025 13:48:22 +0000 (0:00:00.341) 0:01:23.125 ********* 2025-07-12 13:49:37.124026 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.124037 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.124048 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.124059 | orchestrator | 2025-07-12 13:49:37.124069 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-07-12 13:49:37.124086 | orchestrator | Saturday 12 July 2025 13:48:22 +0000 (0:00:00.292) 0:01:23.418 ********* 2025-07-12 13:49:37.124097 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.124108 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.124119 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.124129 | orchestrator | 2025-07-12 13:49:37.124140 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-07-12 13:49:37.124151 | orchestrator | Saturday 12 July 2025 13:48:23 +0000 (0:00:00.484) 
0:01:23.903 ********* 2025-07-12 13:49:37.124169 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:49:37.124180 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.124198 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.124210 | orchestrator | 2025-07-12 13:49:37.124221 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-12 13:49:37.124231 | orchestrator | Saturday 12 July 2025 13:48:23 +0000 (0:00:00.330) 0:01:24.233 ********* 2025-07-12 13:49:37.124241 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:49:37.124250 | orchestrator | 2025-07-12 13:49:37.124260 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-07-12 13:49:37.124270 | orchestrator | Saturday 12 July 2025 13:48:24 +0000 (0:00:00.666) 0:01:24.900 ********* 2025-07-12 13:49:37.124280 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.124290 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.124299 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.124309 | orchestrator | 2025-07-12 13:49:37.124319 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-07-12 13:49:37.124328 | orchestrator | Saturday 12 July 2025 13:48:24 +0000 (0:00:00.938) 0:01:25.839 ********* 2025-07-12 13:49:37.124338 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.124348 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.124357 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.124367 | orchestrator | 2025-07-12 13:49:37.124376 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-07-12 13:49:37.124386 | orchestrator | Saturday 12 July 2025 13:48:25 +0000 (0:00:00.435) 0:01:26.274 ********* 2025-07-12 13:49:37.124396 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 13:49:37.124405 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:49:37.124415 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:49:37.124424 | orchestrator |
2025-07-12 13:49:37.124434 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-07-12 13:49:37.124444 | orchestrator | Saturday 12 July 2025 13:48:25 +0000 (0:00:00.347) 0:01:26.622 *********
2025-07-12 13:49:37.124453 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:49:37.124463 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:49:37.124472 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:49:37.124482 | orchestrator |
2025-07-12 13:49:37.124492 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-07-12 13:49:37.124501 | orchestrator | Saturday 12 July 2025 13:48:26 +0000 (0:00:00.307) 0:01:26.929 *********
2025-07-12 13:49:37.124511 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:49:37.124520 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:49:37.124530 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:49:37.124540 | orchestrator |
2025-07-12 13:49:37.124549 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-07-12 13:49:37.124559 | orchestrator | Saturday 12 July 2025 13:48:26 +0000 (0:00:00.587) 0:01:27.516 *********
2025-07-12 13:49:37.124568 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:49:37.124578 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:49:37.124587 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:49:37.124597 | orchestrator |
2025-07-12 13:49:37.124607 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-07-12 13:49:37.124617 | orchestrator | Saturday 12 July 2025 13:48:26 +0000 (0:00:00.345) 0:01:27.862 *********
2025-07-12 13:49:37.124626 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:49:37.124636 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:49:37.124645 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:49:37.124655 | orchestrator |
2025-07-12 13:49:37.124679 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-07-12 13:49:37.124689 | orchestrator | Saturday 12 July 2025 13:48:27 +0000 (0:00:00.319) 0:01:28.181 *********
2025-07-12 13:49:37.124699 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:49:37.124714 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:49:37.124724 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:49:37.124733 | orchestrator |
2025-07-12 13:49:37.124743 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-07-12 13:49:37.124753 | orchestrator | Saturday 12 July 2025 13:48:27 +0000 (0:00:00.530) 0:01:28.712 *********
2025-07-12 13:49:37.124763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124881 | orchestrator |
2025-07-12 13:49:37.124891 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-07-12 13:49:37.124901 | orchestrator | Saturday 12 July 2025 13:48:29 +0000 (0:00:01.605) 0:01:30.318 *********
2025-07-12 13:49:37.124912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.124994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125019 | orchestrator |
2025-07-12 13:49:37.125029 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-07-12 13:49:37.125039 | orchestrator | Saturday 12 July 2025 13:48:33 +0000 (0:00:03.920) 0:01:34.238 *********
2025-07-12 13:49:37.125050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125151 | orchestrator |
2025-07-12 13:49:37.125167 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-12 13:49:37.125177 | orchestrator | Saturday 12 July 2025 13:48:35 +0000 (0:00:01.995) 0:01:36.233 *********
2025-07-12 13:49:37.125187 | orchestrator |
2025-07-12 13:49:37.125196 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-12 13:49:37.125206 | orchestrator | Saturday 12 July 2025 13:48:35 +0000 (0:00:00.069) 0:01:36.303 *********
2025-07-12 13:49:37.125216 | orchestrator |
2025-07-12 13:49:37.125225 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-12 13:49:37.125235 | orchestrator | Saturday 12 July 2025 13:48:35 +0000 (0:00:00.065) 0:01:36.368 *********
2025-07-12 13:49:37.125244 | orchestrator |
2025-07-12 13:49:37.125254 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-07-12 13:49:37.125264 | orchestrator | Saturday 12 July 2025 13:48:35 +0000 (0:00:00.079) 0:01:36.448 *********
2025-07-12 13:49:37.125273 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:49:37.125283 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:49:37.125293 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:49:37.125302 | orchestrator |
2025-07-12 13:49:37.125312 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-07-12 13:49:37.125322 | orchestrator | Saturday 12 July 2025 13:48:38 +0000 (0:00:02.501) 0:01:38.949 *********
2025-07-12 13:49:37.125331 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:49:37.125341 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:49:37.125350 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:49:37.125360 | orchestrator |
2025-07-12 13:49:37.125370 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-07-12 13:49:37.125379 | orchestrator | Saturday 12 July 2025 13:48:46 +0000 (0:00:08.246) 0:01:47.195 *********
2025-07-12 13:49:37.125389 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:49:37.125399 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:49:37.125408 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:49:37.125418 | orchestrator |
2025-07-12 13:49:37.125428 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-07-12 13:49:37.125437 | orchestrator | Saturday 12 July 2025 13:48:53 +0000 (0:00:07.527) 0:01:54.723 *********
2025-07-12 13:49:37.125447 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:49:37.125456 | orchestrator |
2025-07-12 13:49:37.125466 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-07-12 13:49:37.125476 | orchestrator | Saturday 12 July 2025 13:48:53 +0000 (0:00:00.105) 0:01:54.829 *********
2025-07-12 13:49:37.125485 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:49:37.125495 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:49:37.125505 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:49:37.125515 | orchestrator |
2025-07-12 13:49:37.125525 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-07-12 13:49:37.125535 | orchestrator | Saturday 12 July 2025 13:48:54 +0000 (0:00:00.806) 0:01:55.635 *********
2025-07-12 13:49:37.125544 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:49:37.125554 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:49:37.125564 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:49:37.125573 | orchestrator |
2025-07-12 13:49:37.125583 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-07-12 13:49:37.125592 | orchestrator | Saturday 12 July 2025 13:48:55 +0000 (0:00:00.943) 0:01:56.579 *********
2025-07-12 13:49:37.125602 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:49:37.125612 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:49:37.125621 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:49:37.125631 | orchestrator |
2025-07-12 13:49:37.125644 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-07-12 13:49:37.125655 | orchestrator | Saturday 12 July 2025 13:48:56 +0000 (0:00:00.797) 0:01:57.376 *********
2025-07-12 13:49:37.125711 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:49:37.125721 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:49:37.125737 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:49:37.125747 | orchestrator |
2025-07-12 13:49:37.125756 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-07-12 13:49:37.125766 | orchestrator | Saturday 12 July 2025 13:48:57 +0000 (0:00:00.621) 0:01:57.998 *********
2025-07-12 13:49:37.125776 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:49:37.125785 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:49:37.125801 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:49:37.125811 | orchestrator |
2025-07-12 13:49:37.125821 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-07-12 13:49:37.125831 | orchestrator | Saturday 12 July 2025 13:48:57 +0000 (0:00:00.704) 0:01:58.702 *********
2025-07-12 13:49:37.125840 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:49:37.125850 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:49:37.125859 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:49:37.125869 | orchestrator |
2025-07-12 13:49:37.125879 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-07-12 13:49:37.125888 | orchestrator | Saturday 12 July 2025 13:48:58 +0000 (0:00:01.185) 0:01:59.887 *********
2025-07-12 13:49:37.125898 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:49:37.125908 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:49:37.125917 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:49:37.125926 | orchestrator |
2025-07-12 13:49:37.125936 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-07-12 13:49:37.125944 | orchestrator | Saturday 12 July 2025 13:48:59 +0000 (0:00:00.300) 0:02:00.188 *********
2025-07-12 13:49:37.125953 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125961 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125969 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125977 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125986 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.125994 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126002 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126041 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126057 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126065 | orchestrator |
2025-07-12 13:49:37.126074 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-07-12 13:49:37.126082 | orchestrator | Saturday 12 July 2025 13:49:00 +0000 (0:00:01.434) 0:02:01.623 *********
2025-07-12 13:49:37.126090 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126099 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126107 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126115 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126140 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126175 | orchestrator |
2025-07-12 13:49:37.126183 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-07-12 13:49:37.126191 | orchestrator | Saturday 12 July 2025 13:49:06 +0000 (0:00:05.822) 0:02:07.445 *********
2025-07-12 13:49:37.126204 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126213 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126221 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126238 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126268 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 13:49:37.126285 | orchestrator |
2025-07-12 13:49:37.126293 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-12 13:49:37.126301 | orchestrator | Saturday 12 July 2025 13:49:09 +0000 (0:00:03.360) 0:02:10.805 *********
2025-07-12 13:49:37.126309 | orchestrator |
2025-07-12 13:49:37.126317 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-12 13:49:37.126329 | orchestrator | Saturday 12 July 2025 13:49:09 +0000 (0:00:00.067) 0:02:10.873 *********
2025-07-12 13:49:37.126337 | orchestrator |
2025-07-12 13:49:37.126345 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-12 13:49:37.126353 | orchestrator | Saturday 12 July 2025 13:49:10 +0000 (0:00:00.065) 0:02:10.939 *********
2025-07-12 13:49:37.126361 | orchestrator |
2025-07-12 13:49:37.126368 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-07-12 13:49:37.126376 | orchestrator | Saturday 12 July 2025 13:49:10 +0000 (0:00:00.067) 0:02:11.006 *********
2025-07-12 13:49:37.126384 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:49:37.126393 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:49:37.126401 | orchestrator |
2025-07-12 13:49:37.126413 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-07-12 13:49:37.126421 | orchestrator | Saturday 12 July 2025 13:49:16 +0000 (0:00:06.186) 0:02:17.192 *********
2025-07-12 13:49:37.126429 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:49:37.126437 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:49:37.126445 | orchestrator |
2025-07-12 13:49:37.126453 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-07-12 13:49:37.126461 | orchestrator | Saturday 12 July 2025 13:49:22 +0000 (0:00:06.279) 0:02:23.389 *********
2025-07-12 13:49:37.126469 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:49:37.126477 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:49:37.126485 | orchestrator |
2025-07-12 13:49:37.126492 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-07-12 13:49:37.126500 | orchestrator | Saturday 12 July 2025 13:49:28 +0000 (0:00:06.279) 0:02:29.669 *********
2025-07-12 13:49:37.126508 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:49:37.126516 | orchestrator | 2025-07-12 13:49:37.126524 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-12 13:49:37.126532 | orchestrator | Saturday 12 July 2025 13:49:28 +0000 (0:00:00.129) 0:02:29.798 ********* 2025-07-12 13:49:37.126540 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.126548 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.126556 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.126564 | orchestrator | 2025-07-12 13:49:37.126572 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-12 13:49:37.126580 | orchestrator | Saturday 12 July 2025 13:49:29 +0000 (0:00:01.007) 0:02:30.805 ********* 2025-07-12 13:49:37.126588 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.126596 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.126608 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:49:37.126616 | orchestrator | 2025-07-12 13:49:37.126625 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-12 13:49:37.126633 | orchestrator | Saturday 12 July 2025 13:49:30 +0000 (0:00:00.593) 0:02:31.399 ********* 2025-07-12 13:49:37.126641 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.126649 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.126657 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.126680 | orchestrator | 2025-07-12 13:49:37.126689 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-12 13:49:37.126697 | orchestrator | Saturday 12 July 2025 13:49:31 +0000 (0:00:00.804) 0:02:32.204 ********* 2025-07-12 13:49:37.126705 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:49:37.126712 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:49:37.126720 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:49:37.126728 | orchestrator 
| 2025-07-12 13:49:37.126736 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-12 13:49:37.126744 | orchestrator | Saturday 12 July 2025 13:49:31 +0000 (0:00:00.649) 0:02:32.854 ********* 2025-07-12 13:49:37.126752 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.126760 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.126768 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.126776 | orchestrator | 2025-07-12 13:49:37.126784 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-12 13:49:37.126792 | orchestrator | Saturday 12 July 2025 13:49:32 +0000 (0:00:00.985) 0:02:33.839 ********* 2025-07-12 13:49:37.126800 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:49:37.126808 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:49:37.126815 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:49:37.126823 | orchestrator | 2025-07-12 13:49:37.126831 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:49:37.126839 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-12 13:49:37.126848 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-12 13:49:37.126856 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-12 13:49:37.126864 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:49:37.126872 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:49:37.126880 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:49:37.126887 | orchestrator | 2025-07-12 13:49:37.126896 | orchestrator | 2025-07-12 
13:49:37.126903 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:49:37.126911 | orchestrator | Saturday 12 July 2025 13:49:33 +0000 (0:00:00.841) 0:02:34.681 ********* 2025-07-12 13:49:37.126919 | orchestrator | =============================================================================== 2025-07-12 13:49:37.126927 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 38.94s 2025-07-12 13:49:37.126941 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.89s 2025-07-12 13:49:37.126950 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.44s 2025-07-12 13:49:37.126957 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.81s 2025-07-12 13:49:37.126965 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.69s 2025-07-12 13:49:37.126973 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.82s 2025-07-12 13:49:37.126989 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.92s 2025-07-12 13:49:37.127001 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.36s 2025-07-12 13:49:37.127009 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.68s 2025-07-12 13:49:37.127017 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.07s 2025-07-12 13:49:37.127025 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.02s 2025-07-12 13:49:37.127033 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.00s 2025-07-12 13:49:37.127041 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.75s 2025-07-12 13:49:37.127049 | 
orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.64s 2025-07-12 13:49:37.127057 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.61s 2025-07-12 13:49:37.127065 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.46s 2025-07-12 13:49:37.127073 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s 2025-07-12 13:49:37.127081 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.39s 2025-07-12 13:49:37.127089 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.33s 2025-07-12 13:49:37.127097 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.19s 2025-07-12 13:49:37.127105 | orchestrator | 2025-07-12 13:49:37 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:49:37.127113 | orchestrator | 2025-07-12 13:49:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:49:40.175726 | orchestrator | 2025-07-12 13:49:40 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:49:40.177730 | orchestrator | 2025-07-12 13:49:40 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:49:40.177766 | orchestrator | 2025-07-12 13:49:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:49:43.223927 | orchestrator | 2025-07-12 13:49:43 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:49:43.226953 | orchestrator | 2025-07-12 13:49:43 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state STARTED 2025-07-12 13:49:43.226985 | orchestrator | 2025-07-12 13:49:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:49:46.271131 | orchestrator | 2025-07-12 13:49:46 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 
2025-07-12 13:52:06.527064 | orchestrator | 2025-07-12 13:52:06 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:52:06.528377 | orchestrator | 2025-07-12 13:52:06 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:52:06.529678 | orchestrator | 2025-07-12 13:52:06 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:52:06.538101 |
orchestrator | 2025-07-12 13:52:06 | INFO  | Task 92289711-15af-4893-9510-11fa51f94b20 is in state SUCCESS 2025-07-12 13:52:06.540085 | orchestrator | 2025-07-12 13:52:06.540106 | orchestrator | 2025-07-12 13:52:06.540115 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:52:06.540123 | orchestrator | 2025-07-12 13:52:06.540131 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:52:06.540138 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.344) 0:00:00.344 ********* 2025-07-12 13:52:06.540145 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.540152 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.540159 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.540165 | orchestrator | 2025-07-12 13:52:06.540172 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:52:06.540178 | orchestrator | Saturday 12 July 2025 13:45:44 +0000 (0:00:00.285) 0:00:00.630 ********* 2025-07-12 13:52:06.540186 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-07-12 13:52:06.540192 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-07-12 13:52:06.540199 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-07-12 13:52:06.540205 | orchestrator | 2025-07-12 13:52:06.540212 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-07-12 13:52:06.540218 | orchestrator | 2025-07-12 13:52:06.540250 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-12 13:52:06.540258 | orchestrator | Saturday 12 July 2025 13:45:45 +0000 (0:00:00.669) 0:00:01.300 ********* 2025-07-12 13:52:06.540265 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2025-07-12 13:52:06.540271 | orchestrator | 2025-07-12 13:52:06.540278 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-07-12 13:52:06.540321 | orchestrator | Saturday 12 July 2025 13:45:46 +0000 (0:00:00.906) 0:00:02.206 ********* 2025-07-12 13:52:06.540329 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.540335 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.540342 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.540348 | orchestrator | 2025-07-12 13:52:06.540354 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-07-12 13:52:06.540361 | orchestrator | Saturday 12 July 2025 13:45:47 +0000 (0:00:00.700) 0:00:02.907 ********* 2025-07-12 13:52:06.540367 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.540373 | orchestrator | 2025-07-12 13:52:06.540401 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-07-12 13:52:06.540409 | orchestrator | Saturday 12 July 2025 13:45:48 +0000 (0:00:00.942) 0:00:03.850 ********* 2025-07-12 13:52:06.540415 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.540421 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.540427 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.540433 | orchestrator | 2025-07-12 13:52:06.540439 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-07-12 13:52:06.540446 | orchestrator | Saturday 12 July 2025 13:45:49 +0000 (0:00:00.839) 0:00:04.689 ********* 2025-07-12 13:52:06.540452 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-12 13:52:06.540458 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-12 13:52:06.540471 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-12 13:52:06.540478 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-12 13:52:06.540484 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-12 13:52:06.540491 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-12 13:52:06.540514 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-07-12 13:52:06.540521 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-12 13:52:06.540528 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-07-12 13:52:06.540556 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-07-12 13:52:06.540562 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-12 13:52:06.540569 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-07-12 13:52:06.540597 | orchestrator | 2025-07-12 13:52:06.540604 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-12 13:52:06.540610 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:03.522) 0:00:08.212 ********* 2025-07-12 13:52:06.540617 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-07-12 13:52:06.540623 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-07-12 13:52:06.540630 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-07-12 13:52:06.540636 | orchestrator | 2025-07-12 13:52:06.540658 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-12 13:52:06.540666 | orchestrator | Saturday 12 July 2025 
13:45:53 +0000 (0:00:01.068) 0:00:09.281 ********* 2025-07-12 13:52:06.540672 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-07-12 13:52:06.540679 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-07-12 13:52:06.540685 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-07-12 13:52:06.540691 | orchestrator | 2025-07-12 13:52:06.540697 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-12 13:52:06.540703 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:01.862) 0:00:11.143 ********* 2025-07-12 13:52:06.540716 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-07-12 13:52:06.540722 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.540736 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-07-12 13:52:06.540743 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.540749 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-07-12 13:52:06.540755 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.540761 | orchestrator | 2025-07-12 13:52:06.540767 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-07-12 13:52:06.540773 | orchestrator | Saturday 12 July 2025 13:45:56 +0000 (0:00:01.169) 0:00:12.312 ********* 2025-07-12 13:52:06.540782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.540792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.540803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.540810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.540816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.540827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.540839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:52:06.540846 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:52:06.540853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:52:06.540859 | orchestrator | 2025-07-12 13:52:06.540866 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-07-12 13:52:06.540872 | orchestrator | Saturday 12 July 2025 13:45:58 +0000 (0:00:02.079) 0:00:14.392 ********* 2025-07-12 13:52:06.540903 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.540911 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.540918 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.540924 | orchestrator | 2025-07-12 13:52:06.540930 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-07-12 13:52:06.540937 | orchestrator | Saturday 12 July 2025 13:46:00 +0000 (0:00:01.773) 0:00:16.166 ********* 2025-07-12 13:52:06.540943 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-07-12 13:52:06.540949 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-07-12 
13:52:06.540955 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-07-12 13:52:06.540962 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-07-12 13:52:06.540971 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-07-12 13:52:06.540978 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-07-12 13:52:06.540984 | orchestrator | 2025-07-12 13:52:06.540990 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-07-12 13:52:06.540997 | orchestrator | Saturday 12 July 2025 13:46:03 +0000 (0:00:03.367) 0:00:19.534 ********* 2025-07-12 13:52:06.541003 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.541009 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.541015 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.541022 | orchestrator | 2025-07-12 13:52:06.541028 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-07-12 13:52:06.541034 | orchestrator | Saturday 12 July 2025 13:46:06 +0000 (0:00:03.047) 0:00:22.581 ********* 2025-07-12 13:52:06.541040 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.541056 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.541062 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.541069 | orchestrator | 2025-07-12 13:52:06.541075 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-07-12 13:52:06.541111 | orchestrator | Saturday 12 July 2025 13:46:09 +0000 (0:00:02.510) 0:00:25.096 ********* 2025-07-12 13:52:06.541119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.541139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.541146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.541154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19', 
'__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:52:06.541160 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.541167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.541177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.541188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.541195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:52:06.541202 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.541213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.541220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.541226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.541236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:52:06.541249 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.541256 | orchestrator | 2025-07-12 13:52:06.541262 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-07-12 13:52:06.541268 | orchestrator | Saturday 12 July 2025 13:46:11 +0000 (0:00:01.702) 0:00:26.799 ********* 2025-07-12 13:52:06.541275 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541300 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.541313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 
'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:52:06.541334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.541341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:52:06.541351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.541364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19', '__omit_place_holder__36f49b812020c0f02099d2fe9966dd08bd2c1c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 13:52:06.541374 | orchestrator | 2025-07-12 13:52:06.541436 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-07-12 13:52:06.541443 | orchestrator | 
Saturday 12 July 2025 13:46:14 +0000 (0:00:03.399) 0:00:30.199 ********* 2025-07-12 13:52:06.541452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.541501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:52:06.541511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:52:06.541518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:52:06.541524 | orchestrator | 2025-07-12 13:52:06.541551 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-07-12 13:52:06.541558 | orchestrator | Saturday 12 July 2025 13:46:18 +0000 (0:00:04.151) 0:00:34.351 ********* 2025-07-12 
13:52:06.541565 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 13:52:06.541571 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 13:52:06.541578 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 13:52:06.541584 | orchestrator | 2025-07-12 13:52:06.541590 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-07-12 13:52:06.541596 | orchestrator | Saturday 12 July 2025 13:46:21 +0000 (0:00:03.184) 0:00:37.535 ********* 2025-07-12 13:52:06.541602 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 13:52:06.541609 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 13:52:06.541615 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 13:52:06.541621 | orchestrator | 2025-07-12 13:52:06.542478 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-07-12 13:52:06.542506 | orchestrator | Saturday 12 July 2025 13:46:27 +0000 (0:00:05.525) 0:00:43.061 ********* 2025-07-12 13:52:06.542512 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.542519 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.542525 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.542548 | orchestrator | 2025-07-12 13:52:06.542555 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-07-12 13:52:06.542561 | orchestrator | Saturday 12 July 2025 13:46:27 +0000 (0:00:00.480) 0:00:43.541 ********* 2025-07-12 13:52:06.542568 | orchestrator | changed: [testbed-node-0] 
=> (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 13:52:06.542575 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 13:52:06.542591 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 13:52:06.542597 | orchestrator | 2025-07-12 13:52:06.542603 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-07-12 13:52:06.542610 | orchestrator | Saturday 12 July 2025 13:46:30 +0000 (0:00:02.968) 0:00:46.510 ********* 2025-07-12 13:52:06.542616 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 13:52:06.542623 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 13:52:06.542630 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 13:52:06.542636 | orchestrator | 2025-07-12 13:52:06.542659 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-07-12 13:52:06.542666 | orchestrator | Saturday 12 July 2025 13:46:32 +0000 (0:00:02.004) 0:00:48.515 ********* 2025-07-12 13:52:06.542672 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-07-12 13:52:06.542679 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-07-12 13:52:06.542685 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-07-12 13:52:06.542691 | orchestrator | 2025-07-12 13:52:06.542698 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-07-12 13:52:06.542704 | orchestrator | Saturday 12 July 2025 13:46:34 +0000 (0:00:01.812) 0:00:50.328 ********* 
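The haproxy.pem and haproxy-internal.pem files copied in the tasks above are, in kolla-ansible's convention, combined PEM bundles: HAProxy's `crt` option classically expects the certificate chain and the private key concatenated into one file. A minimal sketch (a hypothetical pre-flight helper, not part of the playbook) that checks such a bundle before handing it to the container:

```python
def check_haproxy_pem(text: str) -> bool:
    """Return True if a PEM bundle looks usable by HAProxy's `crt` option.

    HAProxy reads a single PEM file containing both the certificate
    (chain) and the private key, so both block types must be present.
    The "PRIVATE KEY" check matches RSA, EC, and PKCS#8 headers alike.
    """
    return (
        "-----BEGIN CERTIFICATE-----" in text
        and "PRIVATE KEY-----" in text
    )


# Example: a cert-only file should be rejected, a combined bundle accepted.
cert_only = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
bundle = cert_only + "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n"
```

This is only a structural sanity check; it does not validate that the key actually matches the certificate (for that, a tool such as `openssl x509`/`openssl pkey` comparison would be needed).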
2025-07-12 13:52:06.542710 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-07-12 13:52:06.542716 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-07-12 13:52:06.542723 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-07-12 13:52:06.542729 | orchestrator | 2025-07-12 13:52:06.542735 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-12 13:52:06.542788 | orchestrator | Saturday 12 July 2025 13:46:36 +0000 (0:00:02.034) 0:00:52.362 ********* 2025-07-12 13:52:06.542797 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.542803 | orchestrator | 2025-07-12 13:52:06.542809 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-07-12 13:52:06.542815 | orchestrator | Saturday 12 July 2025 13:46:37 +0000 (0:00:00.977) 0:00:53.340 ********* 2025-07-12 13:52:06.542823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.542830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.542845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.542857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.542864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.542874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.542918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:52:06.542925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:52:06.542932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:52:06.542939 | orchestrator | 2025-07-12 13:52:06.542945 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-07-12 13:52:06.542956 | orchestrator | Saturday 12 July 2025 13:46:40 +0000 (0:00:03.272) 0:00:56.612 ********* 2025-07-12 13:52:06.542969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.542976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.542982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.542989 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.542999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543019 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.543026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543055 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.543061 | orchestrator | 2025-07-12 13:52:06.543068 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-07-12 13:52:06.543074 | orchestrator | Saturday 12 July 2025 13:46:41 +0000 (0:00:00.895) 0:00:57.508 ********* 2025-07-12 13:52:06.543081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543095 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543110 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.543122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543134 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543149 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.543157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543182 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.543189 | orchestrator | 2025-07-12 13:52:06.543196 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-12 13:52:06.543204 | orchestrator | Saturday 12 July 2025 13:46:43 +0000 (0:00:01.424) 0:00:58.932 ********* 2025-07-12 13:52:06.543236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543264 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.543271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543330 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.543337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543368 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.543375 | orchestrator | 2025-07-12 13:52:06.543383 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-12 13:52:06.543390 | orchestrator | Saturday 12 July 2025 13:46:43 +0000 (0:00:00.636) 0:00:59.569 ********* 2025-07-12 13:52:06.543397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543420 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.543454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543479 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.543490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543601 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.543607 | orchestrator | 2025-07-12 13:52:06.543614 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-12 13:52:06.543620 | orchestrator | Saturday 12 July 2025 13:46:44 +0000 (0:00:00.590) 0:01:00.159 ********* 2025-07-12 13:52:06.543630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543655 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.543665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543685 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.543692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543719 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.543725 | orchestrator | 2025-07-12 13:52:06.543732 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-07-12 13:52:06.543738 | orchestrator | Saturday 12 July 2025 13:46:45 +0000 (0:00:01.256) 0:01:01.416 ********* 2025-07-12 13:52:06.543745 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543768 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.543775 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543803 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.543809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543833 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.543839 | orchestrator | 2025-07-12 13:52:06.543846 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal 
TLS certificate] *** 2025-07-12 13:52:06.543852 | orchestrator | Saturday 12 July 2025 13:46:46 +0000 (0:00:00.693) 0:01:02.110 ********* 2025-07-12 13:52:06.543859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-07-12 13:52:06.543889 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.543895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.543914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.543945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543952 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.543959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.543965 | orchestrator | skipping: [testbed-node-1] 
2025-07-12 13:52:06.543972 | orchestrator | 2025-07-12 13:52:06.543978 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-07-12 13:52:06.543988 | orchestrator | Saturday 12 July 2025 13:46:48 +0000 (0:00:01.690) 0:01:03.800 ********* 2025-07-12 13:52:06.543994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.544001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.544008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.544014 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.544025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.544036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.544043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.544049 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.544059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 13:52:06.544066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 13:52:06.544072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 13:52:06.544079 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.544085 | orchestrator | 2025-07-12 13:52:06.544091 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-07-12 13:52:06.544098 | orchestrator | Saturday 12 July 2025 13:46:49 +0000 (0:00:01.473) 0:01:05.274 ********* 2025-07-12 13:52:06.544104 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-12 13:52:06.544111 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-12 13:52:06.544120 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-12 13:52:06.544127 | orchestrator | 2025-07-12 13:52:06.544133 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-07-12 13:52:06.544144 | orchestrator | Saturday 12 July 2025 13:46:51 +0000 (0:00:01.529) 0:01:06.803 ********* 2025-07-12 13:52:06.544151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-12 13:52:06.544157 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-12 13:52:06.544163 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-12 13:52:06.544170 | orchestrator | 2025-07-12 13:52:06.544176 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-07-12 13:52:06.544182 | orchestrator | Saturday 12 July 2025 13:46:52 +0000 (0:00:01.484) 0:01:08.287 ********* 2025-07-12 13:52:06.544188 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 13:52:06.544195 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 13:52:06.544201 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 13:52:06.544207 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 13:52:06.544214 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.544220 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 13:52:06.544226 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.544233 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 13:52:06.544239 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.544245 | orchestrator | 2025-07-12 13:52:06.544251 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-07-12 13:52:06.544258 | orchestrator | Saturday 12 July 2025 13:46:53 +0000 (0:00:01.106) 0:01:09.394 ********* 2025-07-12 13:52:06.544267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.544274 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.544281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 13:52:06.544291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.544302 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.544308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 13:52:06.544315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 13:52:06.544325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 13:52:06.544331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 13:52:06.544338 | orchestrator |
2025-07-12 13:52:06.544344 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-07-12 13:52:06.544351 | orchestrator | Saturday 12 July 2025 13:46:56 +0000 (0:00:02.884) 0:01:12.279 *********
2025-07-12 13:52:06.544357 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:52:06.544363 | orchestrator |
2025-07-12 13:52:06.544369 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-07-12 13:52:06.544376 | orchestrator | Saturday 12 July 2025 13:46:57 +0000 (0:00:00.950) 0:01:13.230 *********
2025-07-12 13:52:06.544387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 13:52:06.544398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 13:52:06.544405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.544412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.544422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 13:52:06.544428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 13:52:06.544442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.544892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 13:52:06.544913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.544920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 13:52:06.544930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.544936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.544943 | orchestrator |
2025-07-12 13:52:06.544956 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-07-12 13:52:06.544963 | orchestrator | Saturday 12 July 2025 13:47:01 +0000 (0:00:04.232) 0:01:17.462 *********
2025-07-12 13:52:06.544970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 13:52:06.544982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 13:52:06.544989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.544995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545002 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.545011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 13:52:06.545018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 13:52:06.545028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 13:52:06.545039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 13:52:06.545052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545065 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.545074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545084 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.545090 | orchestrator |
2025-07-12 13:52:06.545096 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-07-12 13:52:06.545103 | orchestrator | Saturday 12 July 2025 13:47:02 +0000 (0:00:00.765) 0:01:18.227 *********
2025-07-12 13:52:06.545109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-07-12 13:52:06.545138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-07-12 13:52:06.545146 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.545153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-07-12 13:52:06.545159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-07-12 13:52:06.545218 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.545225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-07-12 13:52:06.545231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-07-12 13:52:06.545257 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.545265 | orchestrator |
2025-07-12 13:52:06.545276 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-07-12 13:52:06.545283 | orchestrator | Saturday 12 July 2025 13:47:03 +0000 (0:00:01.216) 0:01:19.443 *********
2025-07-12 13:52:06.545289 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:52:06.545295 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:52:06.545301 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:52:06.545322 | orchestrator |
2025-07-12 13:52:06.545329 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-07-12 13:52:06.545335 | orchestrator | Saturday 12 July 2025 13:47:05 +0000 (0:00:01.492) 0:01:20.936 *********
2025-07-12 13:52:06.545341 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:52:06.545348 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:52:06.545354 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:52:06.545360 | orchestrator |
2025-07-12 13:52:06.545366 | orchestrator | TASK [include_role : barbican] *************************************************
2025-07-12 13:52:06.545372 | orchestrator | Saturday 12 July 2025 13:47:07 +0000 (0:00:02.103) 0:01:23.039 *********
2025-07-12 13:52:06.545379 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:52:06.545385 | orchestrator |
2025-07-12 13:52:06.545391 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-07-12 13:52:06.545398 | orchestrator | Saturday 12 July 2025 13:47:08 +0000 (0:00:00.753) 0:01:23.793 *********
2025-07-12 13:52:06.545405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.545419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.545444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.545470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545483 | orchestrator |
2025-07-12 13:52:06.545490 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-07-12 13:52:06.545496 | orchestrator | Saturday 12 July 2025 13:47:12 +0000 (0:00:04.361) 0:01:28.155 *********
2025-07-12 13:52:06.545506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.545514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545576 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.545587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.545595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.545622 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.545629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.545649 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.545656 | orchestrator |
2025-07-12 13:52:06.545676 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-07-12 13:52:06.545703 | orchestrator | Saturday 12 July 2025 13:47:13 +0000 (0:00:00.585) 0:01:28.740 *********
2025-07-12 13:52:06.545715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 13:52:06.545723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 13:52:06.545731 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.545738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 13:52:06.545746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 13:52:06.545753 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.545760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 13:52:06.545767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 13:52:06.545775 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.545782 | orchestrator |
2025-07-12 13:52:06.545789 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-07-12 13:52:06.545796 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:01.283) 0:01:30.024 *********
2025-07-12 13:52:06.545804 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:52:06.545811 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:52:06.545818 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:52:06.545825 | orchestrator |
2025-07-12 13:52:06.545832 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-07-12 13:52:06.545840 | orchestrator | Saturday 12 July 2025 13:47:16 +0000 (0:00:02.552) 0:01:32.576 *********
2025-07-12 13:52:06.545847 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:52:06.545854 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:52:06.545861 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:52:06.545871 | orchestrator |
2025-07-12 13:52:06.545881 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-07-12 13:52:06.545887 | orchestrator | Saturday 12 July 2025 13:47:18 +0000 (0:00:01.968) 0:01:34.545 *********
2025-07-12 13:52:06.545894 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.545900 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.545906 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.545912 | orchestrator |
2025-07-12 13:52:06.545919 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-07-12 13:52:06.545925 | orchestrator | Saturday 12 July 2025 13:47:19 +0000 (0:00:00.315) 0:01:34.860 *********
2025-07-12 13:52:06.545931 |
orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.545936 | orchestrator | 2025-07-12 13:52:06.545942 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-07-12 13:52:06.545948 | orchestrator | Saturday 12 July 2025 13:47:19 +0000 (0:00:00.687) 0:01:35.548 ********* 2025-07-12 13:52:06.545955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 13:52:06.545964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 13:52:06.545971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 13:52:06.545977 | orchestrator | 2025-07-12 13:52:06.545982 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-07-12 13:52:06.545988 | orchestrator | Saturday 12 July 2025 13:47:22 +0000 (0:00:02.992) 0:01:38.540 ********* 2025-07-12 13:52:06.545997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 13:52:06.546007 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.546013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 13:52:06.546053 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.546059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check 
inter 2000 rise 2 fall 5']}}}})  2025-07-12 13:52:06.546066 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.546071 | orchestrator | 2025-07-12 13:52:06.546077 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-07-12 13:52:06.546083 | orchestrator | Saturday 12 July 2025 13:47:24 +0000 (0:00:01.309) 0:01:39.849 ********* 2025-07-12 13:52:06.546093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:52:06.546100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:52:06.546106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:52:06.546117 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.546124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:52:06.546130 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.546140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:52:06.546146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 13:52:06.546152 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.546158 | orchestrator | 2025-07-12 13:52:06.546164 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-07-12 13:52:06.546170 | orchestrator | Saturday 12 July 2025 13:47:25 +0000 (0:00:01.750) 0:01:41.599 ********* 2025-07-12 13:52:06.546176 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.546182 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.546187 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.546193 | orchestrator | 2025-07-12 13:52:06.546199 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] 
*********** 2025-07-12 13:52:06.546205 | orchestrator | Saturday 12 July 2025 13:47:26 +0000 (0:00:00.953) 0:01:42.552 ********* 2025-07-12 13:52:06.546211 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.546216 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.546222 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.546228 | orchestrator | 2025-07-12 13:52:06.546234 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-07-12 13:52:06.546239 | orchestrator | Saturday 12 July 2025 13:47:28 +0000 (0:00:01.091) 0:01:43.643 ********* 2025-07-12 13:52:06.546245 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.546251 | orchestrator | 2025-07-12 13:52:06.546257 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-07-12 13:52:06.546262 | orchestrator | Saturday 12 July 2025 13:47:28 +0000 (0:00:00.914) 0:01:44.558 ********* 2025-07-12 13:52:06.546271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.546281 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.546311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.546356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546381 | orchestrator | 2025-07-12 13:52:06.546387 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-07-12 13:52:06.546393 | orchestrator | Saturday 12 July 2025 13:47:32 +0000 (0:00:03.856) 0:01:48.414 ********* 2025-07-12 13:52:06.546399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.546405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546427 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.546435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.546446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546468 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.546474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.546480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546505 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.546511 | orchestrator | 2025-07-12 13:52:06.546517 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-07-12 13:52:06.546523 | orchestrator | Saturday 12 July 2025 13:47:33 +0000 (0:00:01.088) 0:01:49.503 ********* 2025-07-12 13:52:06.546529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 13:52:06.546553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 13:52:06.546559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 13:52:06.546565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 13:52:06.546571 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
13:52:06.546577 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.546583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 13:52:06.546589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 13:52:06.546595 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.546601 | orchestrator | 2025-07-12 13:52:06.546607 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-07-12 13:52:06.546617 | orchestrator | Saturday 12 July 2025 13:47:35 +0000 (0:00:01.216) 0:01:50.719 ********* 2025-07-12 13:52:06.546622 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.546628 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.546634 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.546640 | orchestrator | 2025-07-12 13:52:06.546646 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-07-12 13:52:06.546651 | orchestrator | Saturday 12 July 2025 13:47:36 +0000 (0:00:01.341) 0:01:52.061 ********* 2025-07-12 13:52:06.546657 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.546663 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.546669 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.546675 | orchestrator | 2025-07-12 13:52:06.546681 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-07-12 13:52:06.546686 | orchestrator | Saturday 12 July 2025 13:47:38 +0000 (0:00:02.252) 0:01:54.313 ********* 2025-07-12 13:52:06.546692 | orchestrator | skipping: [testbed-node-0] 2025-07-12 
13:52:06.546698 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.546704 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.546710 | orchestrator | 2025-07-12 13:52:06.546715 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-07-12 13:52:06.546721 | orchestrator | Saturday 12 July 2025 13:47:39 +0000 (0:00:00.540) 0:01:54.853 ********* 2025-07-12 13:52:06.546727 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.546733 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.546741 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.546747 | orchestrator | 2025-07-12 13:52:06.546753 | orchestrator | TASK [include_role : designate] ************************************************ 2025-07-12 13:52:06.546758 | orchestrator | Saturday 12 July 2025 13:47:39 +0000 (0:00:00.316) 0:01:55.169 ********* 2025-07-12 13:52:06.546764 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.546770 | orchestrator | 2025-07-12 13:52:06.546776 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-07-12 13:52:06.546781 | orchestrator | Saturday 12 July 2025 13:47:40 +0000 (0:00:00.787) 0:01:55.957 ********* 2025-07-12 13:52:06.546787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 13:52:06.546805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 13:52:06.546812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 13:52:06.546859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 13:52:06.546868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 13:52:06.546948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 13:52:06.546955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.546988 | orchestrator | 2025-07-12 13:52:06.546994 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-07-12 13:52:06.547003 | orchestrator | Saturday 12 July 2025 13:47:45 +0000 (0:00:04.813) 0:02:00.771 ********* 2025-07-12 13:52:06.547013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 13:52:06.547019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 13:52:06.547025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547109 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547116 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.547122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 13:52:06.547128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 13:52:06.547137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547176 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.547182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 13:52:06.547188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 13:52:06.547197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.547237 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.547243 | orchestrator | 2025-07-12 13:52:06.547249 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-07-12 13:52:06.547254 | orchestrator | Saturday 12 July 2025 13:47:46 +0000 (0:00:00.908) 0:02:01.679 ********* 2025-07-12 13:52:06.547261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 13:52:06.547267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 13:52:06.547273 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.547279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 13:52:06.547285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 13:52:06.547290 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.547299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 13:52:06.547305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}})  2025-07-12 13:52:06.547311 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.547317 | orchestrator | 2025-07-12 13:52:06.547323 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-07-12 13:52:06.547328 | orchestrator | Saturday 12 July 2025 13:47:47 +0000 (0:00:01.148) 0:02:02.827 ********* 2025-07-12 13:52:06.547338 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.547344 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.547350 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.547355 | orchestrator | 2025-07-12 13:52:06.547361 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-07-12 13:52:06.547367 | orchestrator | Saturday 12 July 2025 13:47:48 +0000 (0:00:01.732) 0:02:04.560 ********* 2025-07-12 13:52:06.547373 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.547379 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.547384 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.547390 | orchestrator | 2025-07-12 13:52:06.547396 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-07-12 13:52:06.547402 | orchestrator | Saturday 12 July 2025 13:47:50 +0000 (0:00:01.997) 0:02:06.558 ********* 2025-07-12 13:52:06.547407 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.547413 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.547419 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.547425 | orchestrator | 2025-07-12 13:52:06.547430 | orchestrator | TASK [include_role : glance] *************************************************** 2025-07-12 13:52:06.547436 | orchestrator | Saturday 12 July 2025 13:47:51 +0000 (0:00:00.302) 0:02:06.860 ********* 2025-07-12 13:52:06.547442 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 
2025-07-12 13:52:06.547448 | orchestrator | 2025-07-12 13:52:06.547454 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-07-12 13:52:06.547459 | orchestrator | Saturday 12 July 2025 13:47:52 +0000 (0:00:00.802) 0:02:07.663 ********* 2025-07-12 13:52:06.547472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}}}}) 2025-07-12 13:52:06.547483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.547498 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 13:52:06.547508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.547523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 13:52:06.547545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.547556 | orchestrator | 2025-07-12 13:52:06.547562 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-07-12 13:52:06.547568 | orchestrator | Saturday 12 July 2025 13:47:56 +0000 (0:00:04.172) 0:02:11.836 ********* 2025-07-12 13:52:06.547579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 13:52:06.547586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.547596 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.547605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 13:52:06.547616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.547623 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.547635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 13:52:06.547647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.547654 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.547660 | orchestrator | 2025-07-12 13:52:06.547665 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-07-12 13:52:06.547671 | orchestrator | Saturday 12 July 2025 13:47:59 +0000 (0:00:02.812) 0:02:14.648 ********* 2025-07-12 13:52:06.547678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 13:52:06.547691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 13:52:06.547698 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.547704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 13:52:06.547710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 13:52:06.547716 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.547722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 13:52:06.547732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 13:52:06.547738 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.547744 | orchestrator | 2025-07-12 13:52:06.547750 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-07-12 13:52:06.547756 | orchestrator | Saturday 12 July 2025 13:48:01 +0000 (0:00:02.947) 0:02:17.596 ********* 2025-07-12 13:52:06.547762 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.547768 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.547774 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.547780 | orchestrator | 2025-07-12 13:52:06.547786 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-07-12 13:52:06.547792 | orchestrator | Saturday 12 July 2025 13:48:03 +0000 (0:00:01.521) 0:02:19.118 ********* 2025-07-12 13:52:06.547797 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.547809 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.547815 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.547821 | orchestrator | 2025-07-12 13:52:06.547827 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-07-12 13:52:06.547833 | orchestrator | Saturday 12 July 2025 13:48:05 +0000 (0:00:01.995) 0:02:21.113 
********* 2025-07-12 13:52:06.547839 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.547844 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.547850 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.547856 | orchestrator | 2025-07-12 13:52:06.547862 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-07-12 13:52:06.547868 | orchestrator | Saturday 12 July 2025 13:48:05 +0000 (0:00:00.343) 0:02:21.457 ********* 2025-07-12 13:52:06.547874 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.547880 | orchestrator | 2025-07-12 13:52:06.547885 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-07-12 13:52:06.547891 | orchestrator | Saturday 12 July 2025 13:48:06 +0000 (0:00:00.837) 0:02:22.295 ********* 2025-07-12 13:52:06.547900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 13:52:06.547907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 13:52:06.547913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 13:52:06.547919 | orchestrator | 2025-07-12 13:52:06.547925 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-07-12 13:52:06.547931 | orchestrator | Saturday 12 July 2025 13:48:09 +0000 (0:00:03.219) 0:02:25.515 ********* 2025-07-12 13:52:06.547941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 13:52:06.547952 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.547958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 13:52:06.547964 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.547970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 13:52:06.547976 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.547982 | orchestrator | 2025-07-12 13:52:06.547988 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-07-12 13:52:06.547994 | orchestrator | Saturday 12 July 2025 13:48:10 +0000 (0:00:00.394) 0:02:25.910 ********* 2025-07-12 13:52:06.548000 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 13:52:06.548009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 13:52:06.548015 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.548021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 13:52:06.548027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 13:52:06.548093 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.548101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 13:52:06.548107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 13:52:06.548113 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.548119 | orchestrator | 2025-07-12 13:52:06.548125 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-07-12 13:52:06.548131 | orchestrator | Saturday 12 July 2025 13:48:10 +0000 (0:00:00.646) 0:02:26.557 ********* 2025-07-12 13:52:06.548137 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.548142 | orchestrator | changed: 
[testbed-node-1] 2025-07-12 13:52:06.548148 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.548154 | orchestrator | 2025-07-12 13:52:06.548164 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-07-12 13:52:06.548170 | orchestrator | Saturday 12 July 2025 13:48:12 +0000 (0:00:01.682) 0:02:28.239 ********* 2025-07-12 13:52:06.548176 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.548182 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.548188 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.548193 | orchestrator | 2025-07-12 13:52:06.548199 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-07-12 13:52:06.548205 | orchestrator | Saturday 12 July 2025 13:48:14 +0000 (0:00:01.999) 0:02:30.239 ********* 2025-07-12 13:52:06.548211 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.548217 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.548226 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.548233 | orchestrator | 2025-07-12 13:52:06.548239 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-07-12 13:52:06.548244 | orchestrator | Saturday 12 July 2025 13:48:14 +0000 (0:00:00.339) 0:02:30.578 ********* 2025-07-12 13:52:06.548250 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.548256 | orchestrator | 2025-07-12 13:52:06.548262 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-07-12 13:52:06.548268 | orchestrator | Saturday 12 July 2025 13:48:15 +0000 (0:00:00.880) 0:02:31.458 ********* 2025-07-12 13:52:06.548278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:52:06.548321 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:52:06.548337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:52:06.548344 | orchestrator | 2025-07-12 13:52:06.548350 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-07-12 13:52:06.548356 | orchestrator | Saturday 12 July 2025 13:48:19 +0000 (0:00:04.108) 0:02:35.567 ********* 2025-07-12 13:52:06.548372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:52:06.548379 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.548385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:52:06.548396 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.548421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:52:06.548432 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.548441 | orchestrator | 2025-07-12 13:52:06.548447 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-07-12 13:52:06.548453 | orchestrator | Saturday 12 July 2025 13:48:20 +0000 (0:00:00.759) 0:02:36.326 ********* 2025-07-12 13:52:06.548459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:52:06.548465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:52:06.548475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:52:06.548485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:52:06.548492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 13:52:06.548498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:52:06.548504 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.548510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:52:06.548516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:52:06.548526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:52:06.548547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:52:06.548554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 13:52:06.548560 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.548566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:52:06.548572 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 13:52:06.548578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 13:52:06.548583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 13:52:06.548589 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.548595 | orchestrator | 2025-07-12 13:52:06.548605 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-07-12 13:52:06.548644 | orchestrator | Saturday 12 July 2025 13:48:21 +0000 (0:00:00.907) 0:02:37.233 ********* 2025-07-12 13:52:06.548650 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.548685 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.548702 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.548708 | orchestrator | 2025-07-12 13:52:06.548714 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-07-12 13:52:06.548720 | orchestrator | Saturday 12 July 2025 13:48:23 +0000 (0:00:01.638) 0:02:38.872 ********* 2025-07-12 13:52:06.548725 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.548731 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.548737 | orchestrator | changed: [testbed-node-2] 2025-07-12 
13:52:06.548743 | orchestrator | 2025-07-12 13:52:06.548762 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-07-12 13:52:06.548769 | orchestrator | Saturday 12 July 2025 13:48:25 +0000 (0:00:02.144) 0:02:41.017 ********* 2025-07-12 13:52:06.548775 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.548780 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.548786 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.548792 | orchestrator | 2025-07-12 13:52:06.548798 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-07-12 13:52:06.548803 | orchestrator | Saturday 12 July 2025 13:48:25 +0000 (0:00:00.320) 0:02:41.337 ********* 2025-07-12 13:52:06.548809 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.548815 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.548821 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.548826 | orchestrator | 2025-07-12 13:52:06.548832 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-07-12 13:52:06.548838 | orchestrator | Saturday 12 July 2025 13:48:26 +0000 (0:00:00.318) 0:02:41.656 ********* 2025-07-12 13:52:06.548844 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.548849 | orchestrator | 2025-07-12 13:52:06.548855 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-07-12 13:52:06.548861 | orchestrator | Saturday 12 July 2025 13:48:27 +0000 (0:00:01.290) 0:02:42.947 ********* 2025-07-12 13:52:06.548872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:52:06.548880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:52:06.548895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:52:06.548905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:52:06.548911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:52:06.548918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:52:06.548928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:52:06.548935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:52:06.548945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:52:06.548951 | orchestrator | 2025-07-12 13:52:06.548957 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-07-12 13:52:06.548966 | orchestrator | Saturday 12 July 2025 13:48:30 +0000 (0:00:03.558) 0:02:46.506 ********* 2025-07-12 13:52:06.548972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:52:06.548979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:52:06.548990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:52:06.548997 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.549003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:52:06.549014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:52:06.549023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:52:06.549029 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.549036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:52:06.549046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:52:06.549053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:52:06.549063 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.549069 | orchestrator | 2025-07-12 13:52:06.549075 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-07-12 13:52:06.549081 | orchestrator | Saturday 12 July 2025 13:48:31 +0000 (0:00:00.615) 0:02:47.121 ********* 2025-07-12 13:52:06.549087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:52:06.549094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:52:06.549101 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.549107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:52:06.549113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:52:06.549121 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.549127 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:52:06.549134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 13:52:06.549140 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.549145 | orchestrator | 2025-07-12 13:52:06.549151 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-07-12 13:52:06.549157 | orchestrator | Saturday 12 July 2025 13:48:32 +0000 (0:00:01.107) 0:02:48.229 ********* 2025-07-12 13:52:06.549163 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.549169 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.549175 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.549180 | orchestrator | 2025-07-12 13:52:06.549186 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-07-12 13:52:06.549192 | orchestrator | Saturday 12 July 2025 13:48:33 +0000 (0:00:01.347) 0:02:49.577 ********* 2025-07-12 13:52:06.549198 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.549204 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.549210 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.549215 | orchestrator | 2025-07-12 13:52:06.549221 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-07-12 13:52:06.549227 | orchestrator | Saturday 12 July 2025 13:48:36 +0000 (0:00:02.113) 0:02:51.690 ********* 2025-07-12 13:52:06.549233 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.549239 | orchestrator | 
skipping: [testbed-node-1] 2025-07-12 13:52:06.549244 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.549250 | orchestrator | 2025-07-12 13:52:06.549256 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-07-12 13:52:06.549268 | orchestrator | Saturday 12 July 2025 13:48:36 +0000 (0:00:00.330) 0:02:52.021 ********* 2025-07-12 13:52:06.549274 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.549280 | orchestrator | 2025-07-12 13:52:06.549286 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-07-12 13:52:06.549292 | orchestrator | Saturday 12 July 2025 13:48:37 +0000 (0:00:01.195) 0:02:53.217 ********* 2025-07-12 13:52:06.549302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 13:52:06.549333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 13:52:06.549350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 13:52:06.549372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549378 | orchestrator | 2025-07-12 13:52:06.549384 | orchestrator | 
TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-07-12 13:52:06.549390 | orchestrator | Saturday 12 July 2025 13:48:41 +0000 (0:00:03.863) 0:02:57.081 ********* 2025-07-12 13:52:06.549396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 13:52:06.549405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549411 | orchestrator | skipping: [testbed-node-0] 2025-07-12 
13:52:06.549417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 13:52:06.549432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549438 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.549444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 13:52:06.549450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549494 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.549501 | orchestrator | 2025-07-12 13:52:06.549507 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-07-12 13:52:06.549513 | orchestrator | Saturday 12 July 2025 13:48:42 +0000 (0:00:00.665) 0:02:57.746 ********* 2025-07-12 13:52:06.549519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:52:06.549528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:52:06.549607 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.549613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:52:06.549619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:52:06.549630 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.549636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:52:06.549642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 13:52:06.549648 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.549654 | orchestrator | 2025-07-12 13:52:06.549659 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-07-12 13:52:06.549665 | orchestrator | Saturday 12 July 2025 13:48:43 +0000 (0:00:01.364) 0:02:59.110 ********* 2025-07-12 13:52:06.549671 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.549677 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.549683 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.549688 | 
orchestrator | 2025-07-12 13:52:06.549694 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-07-12 13:52:06.549700 | orchestrator | Saturday 12 July 2025 13:48:44 +0000 (0:00:01.290) 0:03:00.401 ********* 2025-07-12 13:52:06.549706 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.549712 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.549717 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.549723 | orchestrator | 2025-07-12 13:52:06.549729 | orchestrator | TASK [include_role : manila] *************************************************** 2025-07-12 13:52:06.549735 | orchestrator | Saturday 12 July 2025 13:48:46 +0000 (0:00:02.196) 0:03:02.598 ********* 2025-07-12 13:52:06.549745 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.549751 | orchestrator | 2025-07-12 13:52:06.549757 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-07-12 13:52:06.549763 | orchestrator | Saturday 12 July 2025 13:48:48 +0000 (0:00:01.078) 0:03:03.677 ********* 2025-07-12 13:52:06.549770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}}}}) 2025-07-12 13:52:06.549776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549804 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 13:52:06.549814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 13:52:06.549821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549874 | orchestrator | 2025-07-12 13:52:06.549895 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-07-12 13:52:06.549908 | orchestrator | Saturday 12 July 2025 13:48:51 +0000 (0:00:03.496) 0:03:07.174 ********* 2025-07-12 13:52:06.549913 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 13:52:06.549919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549941 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.549947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 13:52:06.549956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.549976 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.550013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 13:52:06.550039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.550045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.550055 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.550060 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.550066 | orchestrator | 2025-07-12 13:52:06.550071 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-07-12 13:52:06.550076 | orchestrator | Saturday 12 July 2025 13:48:52 +0000 (0:00:00.672) 0:03:07.846 ********* 2025-07-12 13:52:06.550081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:52:06.550087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:52:06.550092 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.550097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:52:06.550102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:52:06.550111 | orchestrator | skipping: [testbed-node-1] 
2025-07-12 13:52:06.550116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:52:06.550121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 13:52:06.550127 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.550132 | orchestrator | 2025-07-12 13:52:06.550137 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-07-12 13:52:06.550142 | orchestrator | Saturday 12 July 2025 13:48:53 +0000 (0:00:00.847) 0:03:08.694 ********* 2025-07-12 13:52:06.550147 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.550152 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.550157 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.550162 | orchestrator | 2025-07-12 13:52:06.550167 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-07-12 13:52:06.550173 | orchestrator | Saturday 12 July 2025 13:48:54 +0000 (0:00:01.616) 0:03:10.311 ********* 2025-07-12 13:52:06.550178 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.550183 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.550191 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.550196 | orchestrator | 2025-07-12 13:52:06.550201 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-07-12 13:52:06.550206 | orchestrator | Saturday 12 July 2025 13:48:56 +0000 (0:00:02.114) 0:03:12.426 ********* 2025-07-12 13:52:06.550211 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.550216 | orchestrator | 2025-07-12 13:52:06.550221 | 
orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-07-12 13:52:06.550226 | orchestrator | Saturday 12 July 2025 13:48:57 +0000 (0:00:01.073) 0:03:13.499 ********* 2025-07-12 13:52:06.550232 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 13:52:06.550237 | orchestrator | 2025-07-12 13:52:06.550242 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-07-12 13:52:06.550247 | orchestrator | Saturday 12 July 2025 13:49:00 +0000 (0:00:03.104) 0:03:16.604 ********* 2025-07-12 13:52:06.550263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:52:06.550274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:52:06.550296 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.550305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:52:06.550316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:52:06.550326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:52:06.550354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:52:06.550361 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.550366 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.550403 | orchestrator | 2025-07-12 13:52:06.550409 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-07-12 13:52:06.550415 | orchestrator | Saturday 12 July 2025 13:49:04 +0000 (0:00:03.178) 0:03:19.783 ********* 2025-07-12 13:52:06.550423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:52:06.550437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:52:06.550443 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.550453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:52:06.550459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:52:06.550464 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.550473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:52:06.550483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 13:52:06.550488 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.550493 | orchestrator | 2025-07-12 13:52:06.550499 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-07-12 13:52:06.550504 | orchestrator | Saturday 12 July 2025 13:49:06 +0000 (0:00:02.376) 0:03:22.159 ********* 2025-07-12 13:52:06.550509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:52:06.550517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:52:06.550523 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.550528 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:52:06.550547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:52:06.550556 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.550565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:52:06.550571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-07-12 13:52:06.550577 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.550582 | orchestrator | 2025-07-12 13:52:06.550587 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-07-12 13:52:06.550592 | orchestrator | Saturday 12 July 2025 13:49:09 +0000 (0:00:02.711) 0:03:24.870 ********* 2025-07-12 13:52:06.550597 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.550603 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.550608 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.550613 | orchestrator | 2025-07-12 13:52:06.550618 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-07-12 13:52:06.550623 | orchestrator | Saturday 12 July 2025 13:49:11 +0000 (0:00:02.154) 0:03:27.025 ********* 2025-07-12 13:52:06.550628 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.550633 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.550638 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.550643 | orchestrator | 2025-07-12 13:52:06.550648 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-07-12 13:52:06.550654 | orchestrator | Saturday 12 July 2025 13:49:12 +0000 (0:00:01.402) 0:03:28.427 ********* 2025-07-12 13:52:06.550659 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.550664 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 13:52:06.550715 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.550721 | orchestrator | 2025-07-12 13:52:06.550726 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-07-12 13:52:06.550731 | orchestrator | Saturday 12 July 2025 13:49:13 +0000 (0:00:00.323) 0:03:28.750 ********* 2025-07-12 13:52:06.550737 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.550742 | orchestrator | 2025-07-12 13:52:06.550756 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-07-12 13:52:06.550762 | orchestrator | Saturday 12 July 2025 13:49:14 +0000 (0:00:01.072) 0:03:29.823 ********* 2025-07-12 13:52:06.550767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 13:52:06.550777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 13:52:06.550787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 13:52:06.550793 | orchestrator | 2025-07-12 13:52:06.550798 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-07-12 13:52:06.550803 | orchestrator | Saturday 12 July 2025 13:49:15 +0000 (0:00:01.721) 0:03:31.545 ********* 2025-07-12 13:52:06.550808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': 
False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 13:52:06.550814 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.550822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 13:52:06.550827 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.550852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 13:52:06.550861 | orchestrator | skipping: [testbed-node-2] 2025-07-12 
13:52:06.550866 | orchestrator | 2025-07-12 13:52:06.550871 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-07-12 13:52:06.550876 | orchestrator | Saturday 12 July 2025 13:49:16 +0000 (0:00:00.415) 0:03:31.960 ********* 2025-07-12 13:52:06.550882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-12 13:52:06.550888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-12 13:52:06.550893 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.550898 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.550924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-12 13:52:06.550930 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.550936 | orchestrator | 2025-07-12 13:52:06.550941 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-07-12 13:52:06.550946 | orchestrator | Saturday 12 July 2025 13:49:16 +0000 (0:00:00.598) 0:03:32.558 ********* 2025-07-12 13:52:06.550951 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.550956 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.550961 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.550966 | orchestrator | 
2025-07-12 13:52:06.550972 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-07-12 13:52:06.550977 | orchestrator | Saturday 12 July 2025 13:49:17 +0000 (0:00:00.715) 0:03:33.274 ********* 2025-07-12 13:52:06.550982 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.550987 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.550992 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.550997 | orchestrator | 2025-07-12 13:52:06.551002 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-07-12 13:52:06.551007 | orchestrator | Saturday 12 July 2025 13:49:18 +0000 (0:00:01.224) 0:03:34.498 ********* 2025-07-12 13:52:06.551013 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.551018 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.551023 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.551028 | orchestrator | 2025-07-12 13:52:06.551033 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-07-12 13:52:06.551038 | orchestrator | Saturday 12 July 2025 13:49:19 +0000 (0:00:00.290) 0:03:34.789 ********* 2025-07-12 13:52:06.551065 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.551070 | orchestrator | 2025-07-12 13:52:06.551076 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-07-12 13:52:06.551081 | orchestrator | Saturday 12 July 2025 13:49:20 +0000 (0:00:01.423) 0:03:36.212 ********* 2025-07-12 13:52:06.551093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 13:52:06.551099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:52:06.551132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 13:52:06.551149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:52:06.551240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551272 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.551297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.551370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 13:52:06.551393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:52:06.551421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.551514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551525 | orchestrator | 2025-07-12 13:52:06.551543 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-07-12 13:52:06.551549 | orchestrator | Saturday 12 July 2025 13:49:24 +0000 (0:00:04.359) 0:03:40.572 ********* 2025-07-12 13:52:06.551558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 13:52:06.551567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:52:06.551596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 13:52:06.551606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 13:52:06.551642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:52:06.551676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551708 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 13:52:06.551730 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.551820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551868 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.551877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551883 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.551906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 13:52:06.551911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 13:52:06.551943 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.551948 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 13:52:06.551953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.551959 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.551964 | orchestrator | 2025-07-12 13:52:06.551969 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-07-12 13:52:06.551975 | orchestrator | Saturday 12 July 2025 13:49:26 +0000 (0:00:01.470) 0:03:42.043 ********* 2025-07-12 13:52:06.551980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:52:06.552007 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:52:06.552012 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.552021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:52:06.552030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:52:06.552035 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.552040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:52:06.552045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 13:52:06.552051 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.552056 | orchestrator | 2025-07-12 13:52:06.552061 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-07-12 13:52:06.552066 | orchestrator | Saturday 12 July 2025 13:49:28 +0000 (0:00:02.032) 0:03:44.075 ********* 2025-07-12 13:52:06.552071 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.552076 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.552088 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.552093 | orchestrator | 2025-07-12 13:52:06.552098 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] 
************ 2025-07-12 13:52:06.552103 | orchestrator | Saturday 12 July 2025 13:49:29 +0000 (0:00:01.243) 0:03:45.319 ********* 2025-07-12 13:52:06.552109 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.552114 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.552130 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.552135 | orchestrator | 2025-07-12 13:52:06.552140 | orchestrator | TASK [include_role : placement] ************************************************ 2025-07-12 13:52:06.552145 | orchestrator | Saturday 12 July 2025 13:49:31 +0000 (0:00:02.051) 0:03:47.370 ********* 2025-07-12 13:52:06.552150 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.552156 | orchestrator | 2025-07-12 13:52:06.552161 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-07-12 13:52:06.552166 | orchestrator | Saturday 12 July 2025 13:49:32 +0000 (0:00:01.193) 0:03:48.564 ********* 2025-07-12 13:52:06.552176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.552182 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.552196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.552208 | orchestrator | 
2025-07-12 13:52:06.552214 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-07-12 13:52:06.552219 | orchestrator | Saturday 12 July 2025 13:49:36 +0000 (0:00:03.378) 0:03:51.943 ********* 2025-07-12 13:52:06.552224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.552229 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.552253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.552260 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.552265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.552275 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.552289 | orchestrator | 2025-07-12 13:52:06.552294 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-07-12 13:52:06.552307 | orchestrator | Saturday 12 July 2025 13:49:36 +0000 (0:00:00.478) 0:03:52.422 ********* 2025-07-12 13:52:06.552313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:52:06.552319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:52:06.552324 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.552332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:52:06.552338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:52:06.552343 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.552348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:52:06.552354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 13:52:06.552359 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.552364 | orchestrator | 2025-07-12 13:52:06.552369 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-07-12 13:52:06.552374 | orchestrator | Saturday 12 July 2025 13:49:37 +0000 (0:00:00.772) 0:03:53.195 ********* 2025-07-12 13:52:06.552380 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.552385 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.552390 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.552395 | orchestrator | 2025-07-12 13:52:06.552400 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-07-12 
13:52:06.552405 | orchestrator | Saturday 12 July 2025 13:49:39 +0000 (0:00:01.588) 0:03:54.783 *********
2025-07-12 13:52:06.552411 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:52:06.552416 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:52:06.552421 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:52:06.552426 | orchestrator |
2025-07-12 13:52:06.552431 | orchestrator | TASK [include_role : nova] *****************************************************
2025-07-12 13:52:06.552436 | orchestrator | Saturday 12 July 2025 13:49:41 +0000 (0:00:01.996) 0:03:56.779 *********
2025-07-12 13:52:06.552442 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:52:06.552447 | orchestrator |
2025-07-12 13:52:06.552452 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-07-12 13:52:06.552457 | orchestrator | Saturday 12 July 2025 13:49:42 +0000 (0:00:01.236) 0:03:58.016 *********
2025-07-12 13:52:06.552468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.552478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.552502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.552527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552578 | orchestrator |
2025-07-12 13:52:06.552583 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-07-12 13:52:06.552589 | orchestrator | Saturday 12 July 2025 13:49:46 +0000 (0:00:04.266) 0:04:02.282 *********
2025-07-12 13:52:06.552598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.552607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552618 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.552627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.552632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552649 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.552676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 13:52:06.552683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 13:52:06.552697 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.552702 | orchestrator |
2025-07-12 13:52:06.552708 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-07-12 13:52:06.552713 | orchestrator | Saturday 12 July 2025 13:49:47 +0000 (0:00:01.012) 0:04:03.294 *********
2025-07-12 13:52:06.552718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552744 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.552781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552808 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.552813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 13:52:06.552834 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.552839 | orchestrator |
2025-07-12 13:52:06.552845 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-07-12 13:52:06.552850 | orchestrator | Saturday 12 July 2025 13:49:48 +0000 (0:00:00.873) 0:04:04.168 *********
2025-07-12 13:52:06.552855 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:52:06.552860 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:52:06.552866 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:52:06.552871 | orchestrator |
2025-07-12 13:52:06.552876 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-07-12 13:52:06.552881 | orchestrator | Saturday 12 July 2025 13:49:50 +0000 (0:00:01.754) 0:04:05.922 *********
2025-07-12 13:52:06.552886 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:52:06.552891 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:52:06.552897 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:52:06.552902 | orchestrator |
2025-07-12 13:52:06.552907 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-07-12 13:52:06.552912 | orchestrator | Saturday 12 July 2025 13:49:52 +0000 (0:00:02.108) 0:04:08.031 *********
2025-07-12 13:52:06.552917 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:52:06.552922 | orchestrator |
2025-07-12 13:52:06.552928 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-07-12 13:52:06.552936 | orchestrator | Saturday 12 July 2025 13:49:54 +0000 (0:00:01.639) 0:04:09.671 *********
2025-07-12 13:52:06.552941 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-07-12 13:52:06.552951 | orchestrator |
2025-07-12 13:52:06.552956 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-07-12 13:52:06.552961 | orchestrator | Saturday 12 July 2025 13:49:55 +0000 (0:00:01.082) 0:04:10.754 *********
2025-07-12 13:52:06.552967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.552972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.552981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.552987 | orchestrator |
2025-07-12 13:52:06.552992 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-07-12 13:52:06.552997 | orchestrator | Saturday 12 July 2025 13:49:59 +0000 (0:00:04.092) 0:04:14.846 *********
2025-07-12 13:52:06.553003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.553008 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.553013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.553019 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.553024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.553030 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.553035 | orchestrator |
2025-07-12 13:52:06.553040 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-07-12 13:52:06.553050 | orchestrator | Saturday 12 July 2025 13:50:00 +0000 (0:00:01.305) 0:04:16.151 *********
2025-07-12 13:52:06.553058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 13:52:06.553064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 13:52:06.553070 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.553075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 13:52:06.553081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 13:52:06.553086 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.553091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 13:52:06.553097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 13:52:06.553102 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.553107 | orchestrator |
2025-07-12 13:52:06.553113 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-07-12 13:52:06.553118 | orchestrator | Saturday 12 July 2025 13:50:02 +0000 (0:00:01.815) 0:04:17.967 *********
2025-07-12 13:52:06.553123 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:52:06.553128 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:52:06.553133 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:52:06.553139 | orchestrator |
2025-07-12 13:52:06.553144 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-07-12 13:52:06.553149 | orchestrator | Saturday 12 July 2025 13:50:04 +0000 (0:00:02.423) 0:04:20.391 *********
2025-07-12 13:52:06.553154 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:52:06.553163 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:52:06.553168 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:52:06.553173 | orchestrator |
2025-07-12 13:52:06.553179 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-07-12 13:52:06.553184 | orchestrator | Saturday 12 July 2025 13:50:07 +0000 (0:00:03.072) 0:04:23.463 *********
2025-07-12 13:52:06.553189 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-07-12 13:52:06.553194 | orchestrator |
2025-07-12 13:52:06.553200 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-07-12 13:52:06.553205 | orchestrator | Saturday 12 July 2025 13:50:08 +0000 (0:00:00.846) 0:04:24.310 *********
2025-07-12 13:52:06.553210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.553240 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.553252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.553258 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.553266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.553272 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.553287 | orchestrator |
2025-07-12 13:52:06.553293 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-07-12 13:52:06.553298 | orchestrator | Saturday 12 July 2025 13:50:10 +0000 (0:00:01.479) 0:04:25.789 *********
2025-07-12 13:52:06.553304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.553309 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.553315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.553320 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.553329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 13:52:06.553334 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.553340 | orchestrator |
2025-07-12 13:52:06.553345 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-07-12 13:52:06.553350 | orchestrator | Saturday 12 July 2025 13:50:11 +0000 (0:00:01.758) 0:04:27.547 *********
2025-07-12 13:52:06.553355 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.553361 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.553366 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.553371 | orchestrator |
2025-07-12 13:52:06.553376 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-07-12 13:52:06.553387 | orchestrator | Saturday 12 July 2025 13:50:13 +0000 (0:00:01.328) 0:04:28.876 *********
2025-07-12 13:52:06.553392 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:52:06.553397 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:52:06.553403 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:52:06.553408 | orchestrator |
2025-07-12 13:52:06.553413 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-07-12 13:52:06.553418 | orchestrator | Saturday 12 July 2025 13:50:15 +0000 (0:00:02.353) 0:04:31.230 *********
2025-07-12 13:52:06.553423 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:52:06.553429 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:52:06.553434 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:52:06.553439 | orchestrator |
2025-07-12 13:52:06.553444 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-07-12 13:52:06.553450 | orchestrator | Saturday 12 July 2025 13:50:18 +0000 (0:00:03.209) 0:04:34.439 *********
2025-07-12 13:52:06.553455 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-07-12 13:52:06.553460 | orchestrator |
2025-07-12 13:52:06.553465 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-07-12 13:52:06.553471 | orchestrator | Saturday 12 July 2025 13:50:19 +0000 (0:00:01.077) 0:04:35.517 *********
2025-07-12 13:52:06.553476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-12 13:52:06.553481 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.553489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-12 13:52:06.553495 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.553500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-12 13:52:06.553506 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.553511 | orchestrator |
2025-07-12 13:52:06.553516 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-07-12 13:52:06.553522 | orchestrator | Saturday 12 July 2025 13:50:20 +0000 (0:00:01.073) 0:04:36.590 *********
2025-07-12 13:52:06.553527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-12 13:52:06.553572 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.553582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-12 13:52:06.553588 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.553593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-12 13:52:06.553599 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.553604 | orchestrator |
2025-07-12 13:52:06.553609 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-07-12 13:52:06.553614 | orchestrator | Saturday 12 July 2025 13:50:22 +0000 (0:00:01.793) 0:04:37.884 *********
2025-07-12 13:52:06.553620 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:52:06.553625 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:52:06.553630 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:52:06.553635 | orchestrator |
2025-07-12 13:52:06.553640 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-07-12 13:52:06.553645 | orchestrator | Saturday 12 July 2025 13:50:24 +0000 (0:00:01.793) 0:04:39.677 *********
2025-07-12 13:52:06.553651 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:52:06.553656 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:52:06.553661 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:52:06.553666 | orchestrator |
2025-07-12 13:52:06.553671 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-07-12 13:52:06.553677 | orchestrator | Saturday 12 July 2025 13:50:26 +0000 (0:00:02.374) 0:04:42.052 *********
2025-07-12 13:52:06.553682 |
orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.553687 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.553692 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.553697 | orchestrator | 2025-07-12 13:52:06.553702 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-07-12 13:52:06.553708 | orchestrator | Saturday 12 July 2025 13:50:29 +0000 (0:00:03.342) 0:04:45.394 ********* 2025-07-12 13:52:06.553713 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.553718 | orchestrator | 2025-07-12 13:52:06.553723 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-07-12 13:52:06.553728 | orchestrator | Saturday 12 July 2025 13:50:31 +0000 (0:00:01.298) 0:04:46.692 ********* 2025-07-12 13:52:06.553753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.553763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:52:06.553778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.553784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.553789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.553794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.553803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:52:06.553818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.553827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.553832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.553838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:52:06.553843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.553851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.553859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.553864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.553869 | orchestrator | 2025-07-12 13:52:06.553875 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-07-12 13:52:06.553880 | orchestrator | Saturday 12 July 2025 13:50:34 +0000 (0:00:03.656) 0:04:50.349 ********* 2025-07-12 13:52:06.553888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.553893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:52:06.553898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.553906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.553915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.553920 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.553925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.553933 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 13:52:06.553939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.553944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.553949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.553960 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.553965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.553971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 
13:52:06.554056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.554074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 13:52:06.554080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 13:52:06.554094 | orchestrator | skipping: [testbed-node-2] 2025-07-12 
13:52:06.554099 | orchestrator | 2025-07-12 13:52:06.554127 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-07-12 13:52:06.554132 | orchestrator | Saturday 12 July 2025 13:50:35 +0000 (0:00:00.735) 0:04:51.085 ********* 2025-07-12 13:52:06.554138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:52:06.554149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:52:06.554154 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.554162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:52:06.554167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:52:06.554172 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.554177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:52:06.554182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 13:52:06.554187 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 13:52:06.554192 | orchestrator | 2025-07-12 13:52:06.554197 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-07-12 13:52:06.554202 | orchestrator | Saturday 12 July 2025 13:50:36 +0000 (0:00:00.872) 0:04:51.958 ********* 2025-07-12 13:52:06.554206 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.554211 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.554216 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.554221 | orchestrator | 2025-07-12 13:52:06.554226 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-07-12 13:52:06.554231 | orchestrator | Saturday 12 July 2025 13:50:38 +0000 (0:00:01.757) 0:04:53.715 ********* 2025-07-12 13:52:06.554235 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.554240 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.554245 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.554250 | orchestrator | 2025-07-12 13:52:06.554254 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-07-12 13:52:06.554259 | orchestrator | Saturday 12 July 2025 13:50:40 +0000 (0:00:02.103) 0:04:55.819 ********* 2025-07-12 13:52:06.554264 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.554269 | orchestrator | 2025-07-12 13:52:06.554274 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-07-12 13:52:06.554279 | orchestrator | Saturday 12 July 2025 13:50:41 +0000 (0:00:01.326) 0:04:57.145 ********* 2025-07-12 13:52:06.554296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:52:06.554302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:52:06.554314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:52:06.554320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:52:06.554339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:52:06.554345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:52:06.554355 | orchestrator | 2025-07-12 13:52:06.554360 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-07-12 13:52:06.554365 | orchestrator | Saturday 12 July 2025 13:50:46 +0000 (0:00:05.443) 0:05:02.589 ********* 2025-07-12 13:52:06.554372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:52:06.554378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:52:06.554383 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.554392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:52:06.554398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:52:06.554407 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.554414 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:52:06.554420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:52:06.554425 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.554430 | orchestrator | 
2025-07-12 13:52:06.554435 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-07-12 13:52:06.554440 | orchestrator | Saturday 12 July 2025 13:50:47 +0000 (0:00:01.036) 0:05:03.626 ********* 2025-07-12 13:52:06.554445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 13:52:06.554451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:52:06.554459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:52:06.554464 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.554470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 13:52:06.554478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:52:06.554483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:52:06.554489 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
13:52:06.554494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 13:52:06.554498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:52:06.554504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 13:52:06.554509 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.554514 | orchestrator | 2025-07-12 13:52:06.554518 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-07-12 13:52:06.554523 | orchestrator | Saturday 12 July 2025 13:50:48 +0000 (0:00:00.893) 0:05:04.520 ********* 2025-07-12 13:52:06.554528 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.554547 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.554552 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.554556 | orchestrator | 2025-07-12 13:52:06.554562 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-07-12 13:52:06.554566 | orchestrator | Saturday 12 July 2025 13:50:49 +0000 (0:00:00.423) 0:05:04.944 ********* 2025-07-12 13:52:06.554571 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.554576 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.554581 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.554586 | orchestrator | 2025-07-12 13:52:06.554593 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2025-07-12 13:52:06.554598 | orchestrator | Saturday 12 July 2025 13:50:50 +0000 (0:00:01.413) 0:05:06.357 ********* 2025-07-12 13:52:06.554603 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.554608 | orchestrator | 2025-07-12 13:52:06.554613 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-07-12 13:52:06.554618 | orchestrator | Saturday 12 July 2025 13:50:52 +0000 (0:00:01.724) 0:05:08.081 ********* 2025-07-12 13:52:06.554623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 13:52:06.554629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:52:06.554649 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.554666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 13:52:06.554674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:52:06.554679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 
13:52:06.554702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.554708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 13:52:06.554713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:52:06.554719 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.554739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 13:52:06.554752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:52:06.554758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.554776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 13:52:06.554786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:52:06.554794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 13:52:06.554805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 
'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:52:06.554823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.554828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.554847 | orchestrator | 2025-07-12 13:52:06.554852 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-07-12 13:52:06.554857 | orchestrator | Saturday 12 July 2025 13:50:56 +0000 (0:00:04.204) 0:05:12.286 ********* 2025-07-12 13:52:06.554862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 13:52:06.554867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:52:06.554877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.554900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 13:52:06.554906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:52:06.554911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-07-12 13:52:06.554919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.554933 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.554938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 13:52:06.554946 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:52:06.554952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.554970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 13:52:06.554982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:52:06.554987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.554996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.555001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 13:52:06.555006 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.555011 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 13:52:06.555028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.555033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.555038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.555047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 13:52:06.555053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 13:52:06.555058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.555070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 13:52:06.555075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 13:52:06.555080 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555085 | orchestrator | 2025-07-12 13:52:06.555090 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-07-12 13:52:06.555095 | orchestrator | Saturday 12 July 2025 13:50:58 +0000 (0:00:01.568) 0:05:13.855 ********* 2025-07-12 13:52:06.555100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 13:52:06.555105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 13:52:06.555111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:52:06.555119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:52:06.555124 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 13:52:06.555134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 13:52:06.555140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:52:06.555145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:52:06.555153 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 13:52:06.555163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 13:52:06.555168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:52:06.555176 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 13:52:06.555181 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555186 | orchestrator | 2025-07-12 13:52:06.555191 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-07-12 13:52:06.555196 | orchestrator | Saturday 12 July 2025 13:50:59 +0000 (0:00:00.998) 0:05:14.854 ********* 2025-07-12 13:52:06.555201 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555206 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555211 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555216 | orchestrator | 2025-07-12 13:52:06.555220 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-07-12 13:52:06.555225 | orchestrator | Saturday 12 July 2025 13:50:59 +0000 (0:00:00.426) 0:05:15.280 ********* 2025-07-12 13:52:06.555230 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555235 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555240 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555245 | orchestrator | 2025-07-12 13:52:06.555250 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-07-12 13:52:06.555255 | orchestrator | Saturday 12 July 2025 13:51:01 +0000 (0:00:01.703) 0:05:16.983 ********* 2025-07-12 13:52:06.555259 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.555264 | orchestrator | 2025-07-12 13:52:06.555269 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-07-12 13:52:06.555274 | orchestrator | Saturday 12 July 
2025 13:51:03 +0000 (0:00:01.765) 0:05:18.749 ********* 2025-07-12 13:52:06.555282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:52:06.555288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:52:06.555298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 13:52:06.555303 | orchestrator | 2025-07-12 13:52:06.555311 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-07-12 13:52:06.555316 | orchestrator | Saturday 12 July 2025 13:51:05 +0000 (0:00:02.580) 0:05:21.329 ********* 2025-07-12 13:52:06.555321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 13:52:06.555326 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 13:52:06.555340 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 13:52:06.555354 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555359 | orchestrator | 2025-07-12 13:52:06.555364 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-07-12 13:52:06.555368 | orchestrator | Saturday 12 July 2025 13:51:06 +0000 (0:00:00.409) 0:05:21.739 ********* 2025-07-12 13:52:06.555374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 13:52:06.555379 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 13:52:06.555388 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 13:52:06.555398 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555403 | orchestrator | 2025-07-12 13:52:06.555408 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-07-12 13:52:06.555413 | orchestrator | Saturday 12 July 2025 13:51:07 +0000 
(0:00:00.995) 0:05:22.734 ********* 2025-07-12 13:52:06.555420 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555425 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555430 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555435 | orchestrator | 2025-07-12 13:52:06.555440 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-07-12 13:52:06.555445 | orchestrator | Saturday 12 July 2025 13:51:07 +0000 (0:00:00.428) 0:05:23.163 ********* 2025-07-12 13:52:06.555450 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555455 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555460 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555465 | orchestrator | 2025-07-12 13:52:06.555470 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-07-12 13:52:06.555474 | orchestrator | Saturday 12 July 2025 13:51:08 +0000 (0:00:01.313) 0:05:24.477 ********* 2025-07-12 13:52:06.555479 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:52:06.555484 | orchestrator | 2025-07-12 13:52:06.555489 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-07-12 13:52:06.555494 | orchestrator | Saturday 12 July 2025 13:51:10 +0000 (0:00:01.750) 0:05:26.227 ********* 2025-07-12 13:52:06.555499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.555511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.555517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.555525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.555543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.555555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 13:52:06.555561 | orchestrator | 2025-07-12 13:52:06.555566 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-07-12 13:52:06.555571 | orchestrator | Saturday 12 July 2025 13:51:16 +0000 (0:00:06.003) 0:05:32.231 ********* 2025-07-12 13:52:06.555576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.555584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.555589 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.555605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.555610 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.555621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-07-12 13:52:06.555626 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555631 | orchestrator | 2025-07-12 13:52:06.555636 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-07-12 13:52:06.555643 | orchestrator | Saturday 12 July 2025 13:51:17 +0000 (0:00:00.634) 0:05:32.866 ********* 2025-07-12 13:52:06.555648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  
2025-07-12 13:52:06.555653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555671 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555700 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
13:52:06.555705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-07-12 13:52:06.555725 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555730 | orchestrator | 2025-07-12 13:52:06.555735 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-07-12 13:52:06.555740 | orchestrator | Saturday 12 July 2025 13:51:18 +0000 (0:00:01.702) 0:05:34.568 ********* 2025-07-12 13:52:06.555745 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.555750 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.555755 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.555760 | orchestrator | 2025-07-12 13:52:06.555765 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-07-12 13:52:06.555770 | orchestrator | Saturday 12 July 2025 13:51:20 +0000 (0:00:01.345) 0:05:35.914 ********* 2025-07-12 13:52:06.555774 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.555779 | orchestrator | 
changed: [testbed-node-1] 2025-07-12 13:52:06.555784 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.555789 | orchestrator | 2025-07-12 13:52:06.555794 | orchestrator | TASK [include_role : swift] **************************************************** 2025-07-12 13:52:06.555799 | orchestrator | Saturday 12 July 2025 13:51:22 +0000 (0:00:02.191) 0:05:38.106 ********* 2025-07-12 13:52:06.555804 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555809 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555814 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555819 | orchestrator | 2025-07-12 13:52:06.555824 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-07-12 13:52:06.555832 | orchestrator | Saturday 12 July 2025 13:51:22 +0000 (0:00:00.351) 0:05:38.457 ********* 2025-07-12 13:52:06.555837 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555842 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555847 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555852 | orchestrator | 2025-07-12 13:52:06.555857 | orchestrator | TASK [include_role : trove] **************************************************** 2025-07-12 13:52:06.555866 | orchestrator | Saturday 12 July 2025 13:51:23 +0000 (0:00:00.625) 0:05:39.082 ********* 2025-07-12 13:52:06.555871 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555876 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555881 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555885 | orchestrator | 2025-07-12 13:52:06.555890 | orchestrator | TASK [include_role : venus] **************************************************** 2025-07-12 13:52:06.555895 | orchestrator | Saturday 12 July 2025 13:51:23 +0000 (0:00:00.331) 0:05:39.414 ********* 2025-07-12 13:52:06.555900 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555905 | orchestrator | 
skipping: [testbed-node-1] 2025-07-12 13:52:06.555910 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555915 | orchestrator | 2025-07-12 13:52:06.555920 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-07-12 13:52:06.555924 | orchestrator | Saturday 12 July 2025 13:51:24 +0000 (0:00:00.339) 0:05:39.753 ********* 2025-07-12 13:52:06.555929 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555934 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555939 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555944 | orchestrator | 2025-07-12 13:52:06.555949 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-07-12 13:52:06.555954 | orchestrator | Saturday 12 July 2025 13:51:24 +0000 (0:00:00.396) 0:05:40.149 ********* 2025-07-12 13:52:06.555959 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.555964 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.555968 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.555973 | orchestrator | 2025-07-12 13:52:06.555978 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-07-12 13:52:06.555983 | orchestrator | Saturday 12 July 2025 13:51:25 +0000 (0:00:00.866) 0:05:41.016 ********* 2025-07-12 13:52:06.555988 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.555993 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.555998 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.556003 | orchestrator | 2025-07-12 13:52:06.556008 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-07-12 13:52:06.556013 | orchestrator | Saturday 12 July 2025 13:51:26 +0000 (0:00:00.697) 0:05:41.713 ********* 2025-07-12 13:52:06.556017 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.556022 | orchestrator | ok: [testbed-node-1] 
2025-07-12 13:52:06.556027 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.556032 | orchestrator | 2025-07-12 13:52:06.556037 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-07-12 13:52:06.556042 | orchestrator | Saturday 12 July 2025 13:51:26 +0000 (0:00:00.339) 0:05:42.053 ********* 2025-07-12 13:52:06.556047 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.556052 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.556056 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.556061 | orchestrator | 2025-07-12 13:52:06.556066 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-07-12 13:52:06.556074 | orchestrator | Saturday 12 July 2025 13:51:27 +0000 (0:00:01.272) 0:05:43.325 ********* 2025-07-12 13:52:06.556079 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.556084 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.556089 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.556094 | orchestrator | 2025-07-12 13:52:06.556099 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-07-12 13:52:06.556104 | orchestrator | Saturday 12 July 2025 13:51:28 +0000 (0:00:00.942) 0:05:44.268 ********* 2025-07-12 13:52:06.556111 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.556116 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.556121 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.556126 | orchestrator | 2025-07-12 13:52:06.556131 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-07-12 13:52:06.556136 | orchestrator | Saturday 12 July 2025 13:51:29 +0000 (0:00:00.984) 0:05:45.252 ********* 2025-07-12 13:52:06.556141 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.556146 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.556151 | orchestrator | changed: 
[testbed-node-2] 2025-07-12 13:52:06.556156 | orchestrator | 2025-07-12 13:52:06.556161 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-07-12 13:52:06.556165 | orchestrator | Saturday 12 July 2025 13:51:34 +0000 (0:00:04.741) 0:05:49.994 ********* 2025-07-12 13:52:06.556170 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.556175 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.556180 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.556185 | orchestrator | 2025-07-12 13:52:06.556190 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-07-12 13:52:06.556195 | orchestrator | Saturday 12 July 2025 13:51:38 +0000 (0:00:03.745) 0:05:53.740 ********* 2025-07-12 13:52:06.556199 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.556204 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.556209 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.556214 | orchestrator | 2025-07-12 13:52:06.556219 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-07-12 13:52:06.556224 | orchestrator | Saturday 12 July 2025 13:51:46 +0000 (0:00:08.680) 0:06:02.420 ********* 2025-07-12 13:52:06.556229 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.556234 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.556239 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.556244 | orchestrator | 2025-07-12 13:52:06.556249 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-07-12 13:52:06.556253 | orchestrator | Saturday 12 July 2025 13:51:50 +0000 (0:00:03.792) 0:06:06.213 ********* 2025-07-12 13:52:06.556258 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:52:06.556263 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:52:06.556268 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:52:06.556273 | 
orchestrator | 2025-07-12 13:52:06.556278 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-07-12 13:52:06.556283 | orchestrator | Saturday 12 July 2025 13:51:55 +0000 (0:00:04.429) 0:06:10.643 ********* 2025-07-12 13:52:06.556287 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.556292 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.556297 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.556302 | orchestrator | 2025-07-12 13:52:06.556307 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-07-12 13:52:06.556312 | orchestrator | Saturday 12 July 2025 13:51:55 +0000 (0:00:00.356) 0:06:10.999 ********* 2025-07-12 13:52:06.556317 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.556324 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.556329 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.556334 | orchestrator | 2025-07-12 13:52:06.556339 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-07-12 13:52:06.556344 | orchestrator | Saturday 12 July 2025 13:51:56 +0000 (0:00:00.932) 0:06:11.931 ********* 2025-07-12 13:52:06.556349 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.556354 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.556359 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.556363 | orchestrator | 2025-07-12 13:52:06.556368 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-07-12 13:52:06.556373 | orchestrator | Saturday 12 July 2025 13:51:56 +0000 (0:00:00.405) 0:06:12.337 ********* 2025-07-12 13:52:06.556378 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.556386 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.556391 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.556396 | 
orchestrator | 2025-07-12 13:52:06.556401 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-07-12 13:52:06.556406 | orchestrator | Saturday 12 July 2025 13:51:57 +0000 (0:00:00.400) 0:06:12.738 ********* 2025-07-12 13:52:06.556410 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.556415 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.556420 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.556425 | orchestrator | 2025-07-12 13:52:06.556430 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-07-12 13:52:06.556435 | orchestrator | Saturday 12 July 2025 13:51:57 +0000 (0:00:00.387) 0:06:13.125 ********* 2025-07-12 13:52:06.556440 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:52:06.556445 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:52:06.556450 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:52:06.556454 | orchestrator | 2025-07-12 13:52:06.556459 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-07-12 13:52:06.556464 | orchestrator | Saturday 12 July 2025 13:51:58 +0000 (0:00:00.712) 0:06:13.838 ********* 2025-07-12 13:52:06.556469 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.556474 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.556479 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.556484 | orchestrator | 2025-07-12 13:52:06.556489 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-07-12 13:52:06.556494 | orchestrator | Saturday 12 July 2025 13:52:03 +0000 (0:00:04.941) 0:06:18.780 ********* 2025-07-12 13:52:06.556499 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:52:06.556504 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:52:06.556508 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:52:06.556513 | orchestrator | 2025-07-12 
13:52:06.556518 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:52:06.556526 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-12 13:52:06.556542 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-12 13:52:06.556547 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-07-12 13:52:06.556552 | orchestrator | 2025-07-12 13:52:06.556557 | orchestrator | 2025-07-12 13:52:06.556562 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:52:06.556567 | orchestrator | Saturday 12 July 2025 13:52:03 +0000 (0:00:00.799) 0:06:19.580 ********* 2025-07-12 13:52:06.556572 | orchestrator | =============================================================================== 2025-07-12 13:52:06.556577 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.68s 2025-07-12 13:52:06.556582 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.00s 2025-07-12 13:52:06.556586 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.53s 2025-07-12 13:52:06.556591 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.44s 2025-07-12 13:52:06.556596 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.94s 2025-07-12 13:52:06.556601 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.81s 2025-07-12 13:52:06.556606 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.74s 2025-07-12 13:52:06.556611 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.43s 2025-07-12 13:52:06.556616 | 
orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.36s 2025-07-12 13:52:06.556621 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.36s 2025-07-12 13:52:06.556629 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.27s 2025-07-12 13:52:06.556634 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.23s 2025-07-12 13:52:06.556639 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.20s 2025-07-12 13:52:06.556643 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.17s 2025-07-12 13:52:06.556648 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.15s 2025-07-12 13:52:06.556653 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.11s 2025-07-12 13:52:06.556658 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.09s 2025-07-12 13:52:06.556663 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.86s 2025-07-12 13:52:06.556668 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.86s 2025-07-12 13:52:06.556681 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.79s 2025-07-12 13:52:09.605084 | orchestrator | 2025-07-12 13:52:09 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:52:09.606243 | orchestrator | 2025-07-12 13:52:09 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:52:09.608167 | orchestrator | 2025-07-12 13:52:09 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED 2025-07-12 13:52:09.608194 | orchestrator | 2025-07-12 13:52:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 
13:52:12.647498 | orchestrator | 2025-07-12 13:52:12 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED
2025-07-12 13:52:12.648800 | orchestrator | 2025-07-12 13:52:12 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED
2025-07-12 13:52:12.648842 | orchestrator | 2025-07-12 13:52:12 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:52:12.648856 | orchestrator | 2025-07-12 13:52:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:54:30.001889 | orchestrator | 2025-07-12 13:54:29 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED
2025-07-12 13:54:30.007637 | orchestrator | 2025-07-12 13:54:30 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED
2025-07-12 13:54:30.009604 | orchestrator | 2025-07-12 13:54:30 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state STARTED
2025-07-12 13:54:30.009630 | orchestrator | 2025-07-12 13:54:30 | INFO  | Wait 1 second(s) until the next
check
2025-07-12 13:54:33.057750 | orchestrator | 2025-07-12 13:54:33 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED
2025-07-12 13:54:33.059989 | orchestrator | 2025-07-12 13:54:33 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED
2025-07-12 13:54:33.067631 | orchestrator | 2025-07-12 13:54:33 | INFO  | Task a73d26f2-4c22-4c37-b617-61cd2b7277eb is in state SUCCESS
2025-07-12 13:54:33.070421 | orchestrator |
2025-07-12 13:54:33.070485 | orchestrator |
2025-07-12 13:54:33.070498 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-07-12 13:54:33.070511 | orchestrator |
2025-07-12 13:54:33.070522 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-07-12 13:54:33.070562 | orchestrator | Saturday 12 July 2025 13:42:42 +0000 (0:00:00.774) 0:00:00.774 *********
2025-07-12 13:54:33.070575 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.070587 | orchestrator |
2025-07-12 13:54:33.070598 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-07-12 13:54:33.070609 | orchestrator | Saturday 12 July 2025 13:42:43 +0000 (0:00:00.959) 0:00:01.733 *********
2025-07-12 13:54:33.070620 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.070632 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.070643 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.070654 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.070665 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.070676 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.070687 | orchestrator |
2025-07-12 13:54:33.070698 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-07-12 13:54:33.070709 | orchestrator | Saturday 12 July 2025 13:42:45 +0000 (0:00:01.707) 0:00:03.441 *********
2025-07-12 13:54:33.070719 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.070730 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.070741 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.070752 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.070763 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.070774 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.070784 | orchestrator |
2025-07-12 13:54:33.070795 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-07-12 13:54:33.070847 | orchestrator | Saturday 12 July 2025 13:42:46 +0000 (0:00:00.873) 0:00:04.315 *********
2025-07-12 13:54:33.070860 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.070870 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.070881 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.070892 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.070902 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.070913 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.070924 | orchestrator |
2025-07-12 13:54:33.070935 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-07-12 13:54:33.070946 | orchestrator | Saturday 12 July 2025 13:42:47 +0000 (0:00:00.940) 0:00:05.255 *********
2025-07-12 13:54:33.070958 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.070968 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.070979 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.070990 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.071001 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.071012 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.071025 | orchestrator |
2025-07-12 13:54:33.071038 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-12 13:54:33.071050 | orchestrator | Saturday 12 July 2025 13:42:48 +0000 (0:00:00.812) 0:00:06.068 *********
2025-07-12 13:54:33.071063 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.071076 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.071088 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.071101 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.071221 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.071234 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.071246 | orchestrator |
2025-07-12 13:54:33.071258 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-12 13:54:33.071271 | orchestrator | Saturday 12 July 2025 13:42:48 +0000 (0:00:00.657) 0:00:06.725 *********
2025-07-12 13:54:33.071284 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.071296 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.071309 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.071322 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.071335 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.071347 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.071368 | orchestrator |
2025-07-12 13:54:33.071379 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-12 13:54:33.071391 | orchestrator | Saturday 12 July 2025 13:42:49 +0000 (0:00:00.874) 0:00:07.600 *********
2025-07-12 13:54:33.071401 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.071427 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.071456 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.071468 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.071478 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.071489 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.071500 | orchestrator |
2025-07-12 13:54:33.071511 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-12 13:54:33.071522 | orchestrator | Saturday 12 July 2025 13:42:50 +0000 (0:00:01.086) 0:00:08.687 *********
2025-07-12 13:54:33.071533 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.071544 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.071555 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.071566 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.071577 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.071588 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.071598 | orchestrator |
2025-07-12 13:54:33.071610 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-12 13:54:33.071621 | orchestrator | Saturday 12 July 2025 13:42:51 +0000 (0:00:00.929) 0:00:09.616 *********
2025-07-12 13:54:33.071632 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:33.071643 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:54:33.071654 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:54:33.071665 | orchestrator |
2025-07-12 13:54:33.071676 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-12 13:54:33.071687 | orchestrator | Saturday 12 July 2025 13:42:52 +0000 (0:00:00.953) 0:00:10.569 *********
2025-07-12 13:54:33.071698 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.071708 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.071719 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.071730 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.071741 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.071752 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.071763 | orchestrator |
2025-07-12 13:54:33.071788 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-12 13:54:33.071800 | orchestrator | Saturday 12 July 2025 13:42:53 +0000 (0:00:03.027) 0:00:11.677 *********
2025-07-12 13:54:33.071811 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:33.071877 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:54:33.071888 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:54:33.071899 | orchestrator |
2025-07-12 13:54:33.071910 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-12 13:54:33.071921 | orchestrator | Saturday 12 July 2025 13:42:56 +0000 (0:00:03.027) 0:00:14.704 *********
2025-07-12 13:54:33.071992 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:33.072005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:33.072017 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:33.072028 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.072039 | orchestrator |
2025-07-12 13:54:33.072049 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-12 13:54:33.072060 | orchestrator | Saturday 12 July 2025 13:42:57 +0000 (0:00:00.825) 0:00:15.529 *********
2025-07-12 13:54:33.072074 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.072096 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.072108 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.072119 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.072130 | orchestrator |
2025-07-12 13:54:33.072141 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-12 13:54:33.072152 | orchestrator | Saturday 12 July 2025 13:42:59 +0000 (0:00:01.352) 0:00:16.882 *********
2025-07-12 13:54:33.072165 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.072179 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.072196 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.072208 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.072219 | orchestrator |
2025-07-12 13:54:33.072230 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-12 13:54:33.072241 | orchestrator | Saturday 12 July 2025 13:42:59 +0000 (0:00:00.558) 0:00:17.440 *********
2025-07-12 13:54:33.072255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-12 13:42:54.411048', 'end': '2025-07-12 13:42:54.699974', 'delta': '0:00:00.288926', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.072278 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-12 13:42:55.463510', 'end': '2025-07-12 13:42:55.730194', 'delta': '0:00:00.266684', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.072298 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout':
'', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-12 13:42:56.378798', 'end': '2025-07-12 13:42:56.660236', 'delta': '0:00:00.281438', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.072310 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.072321 | orchestrator | 2025-07-12 13:54:33.072332 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-07-12 13:54:33.072343 | orchestrator | Saturday 12 July 2025 13:42:59 +0000 (0:00:00.259) 0:00:17.700 ********* 2025-07-12 13:54:33.072353 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.072364 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.072376 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.072386 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.072397 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.072437 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.072484 | orchestrator | 2025-07-12 13:54:33.072495 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-07-12 13:54:33.072506 | orchestrator | Saturday 12 July 2025 13:43:01 +0000 (0:00:01.541) 0:00:19.245 ********* 2025-07-12 13:54:33.072551 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.072596 | orchestrator | 2025-07-12 13:54:33.072608 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-07-12 13:54:33.072619 | orchestrator | Saturday 12 July 2025 
13:43:02 +0000 (0:00:00.832) 0:00:20.078 ********* 2025-07-12 13:54:33.072630 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.072641 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.072652 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.072663 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.072756 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.072769 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.072780 | orchestrator | 2025-07-12 13:54:33.072791 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-07-12 13:54:33.072802 | orchestrator | Saturday 12 July 2025 13:43:03 +0000 (0:00:01.152) 0:00:21.231 ********* 2025-07-12 13:54:33.072813 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.072824 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.072835 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.072845 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.072856 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.072867 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.072878 | orchestrator | 2025-07-12 13:54:33.072889 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-12 13:54:33.072900 | orchestrator | Saturday 12 July 2025 13:43:04 +0000 (0:00:01.275) 0:00:22.506 ********* 2025-07-12 13:54:33.072911 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.072922 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.072933 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.072943 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.072954 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.072965 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.072976 | orchestrator | 2025-07-12 13:54:33.072987 | orchestrator | TASK 
[ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-07-12 13:54:33.072998 | orchestrator | Saturday 12 July 2025 13:43:05 +0000 (0:00:00.935) 0:00:23.442 ********* 2025-07-12 13:54:33.073008 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.073019 | orchestrator | 2025-07-12 13:54:33.073038 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-07-12 13:54:33.073049 | orchestrator | Saturday 12 July 2025 13:43:05 +0000 (0:00:00.125) 0:00:23.568 ********* 2025-07-12 13:54:33.073059 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.073070 | orchestrator | 2025-07-12 13:54:33.073081 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-12 13:54:33.073092 | orchestrator | Saturday 12 July 2025 13:43:05 +0000 (0:00:00.266) 0:00:23.834 ********* 2025-07-12 13:54:33.073103 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.073114 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.073124 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.073135 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.073146 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.073157 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.073167 | orchestrator | 2025-07-12 13:54:33.073178 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-07-12 13:54:33.073196 | orchestrator | Saturday 12 July 2025 13:43:06 +0000 (0:00:00.625) 0:00:24.459 ********* 2025-07-12 13:54:33.073281 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.073294 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.073305 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.073316 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.073358 | orchestrator | skipping: [testbed-node-4] 2025-07-12 
13:54:33.073370 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.073380 | orchestrator | 2025-07-12 13:54:33.073391 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-07-12 13:54:33.073402 | orchestrator | Saturday 12 July 2025 13:43:07 +0000 (0:00:01.081) 0:00:25.541 ********* 2025-07-12 13:54:33.073413 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.073424 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.073435 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.073504 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.073516 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.073527 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.073594 | orchestrator | 2025-07-12 13:54:33.073605 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-07-12 13:54:33.073616 | orchestrator | Saturday 12 July 2025 13:43:08 +0000 (0:00:00.835) 0:00:26.376 ********* 2025-07-12 13:54:33.073627 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.073638 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.073649 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.073660 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.073671 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.073682 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.073693 | orchestrator | 2025-07-12 13:54:33.073704 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-07-12 13:54:33.073715 | orchestrator | Saturday 12 July 2025 13:43:09 +0000 (0:00:00.847) 0:00:27.223 ********* 2025-07-12 13:54:33.073726 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.073737 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.073748 | orchestrator | skipping: [testbed-node-2] 2025-07-12 
13:54:33.073758 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.073769 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.073780 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.073791 | orchestrator | 2025-07-12 13:54:33.073801 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-07-12 13:54:33.073812 | orchestrator | Saturday 12 July 2025 13:43:09 +0000 (0:00:00.544) 0:00:27.768 ********* 2025-07-12 13:54:33.073823 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.073865 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.073877 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.073888 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.073899 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.073919 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.073930 | orchestrator | 2025-07-12 13:54:33.073941 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-07-12 13:54:33.073952 | orchestrator | Saturday 12 July 2025 13:43:10 +0000 (0:00:00.682) 0:00:28.451 ********* 2025-07-12 13:54:33.074003 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.074257 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.074286 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.074296 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.074306 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.074316 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.074325 | orchestrator | 2025-07-12 13:54:33.074335 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-12 13:54:33.074345 | orchestrator | Saturday 12 July 2025 13:43:11 +0000 (0:00:00.581) 0:00:29.032 ********* 2025-07-12 13:54:33.074361 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1', 'scsi-SQEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part1', 'scsi-SQEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part14', 'scsi-SQEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part15', 'scsi-SQEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part16', 'scsi-SQEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:54:33.074593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:54:33.074750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074764 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.074774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72', 'scsi-SQEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part1', 'scsi-SQEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part14', 'scsi-SQEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part15', 'scsi-SQEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part16', 'scsi-SQEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:54:33.074846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:54:33.074859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.074972 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.074982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22', 'scsi-SQEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part1', 'scsi-SQEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part14', 'scsi-SQEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part15', 'scsi-SQEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part16', 'scsi-SQEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:54:33.075691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:54:33.075784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09698b4c--8482--58a0--ad33--d3500ef3a9f7-osd--block--09698b4c--8482--58a0--ad33--d3500ef3a9f7', 'dm-uuid-LVM-4MF8FKekAfibsfbuuKjfJMplsjoYqjph0Xzt3sPd98YeKxR2QYYiQusPioenEqOL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f35471dc--23d0--5222--b540--93882fae0f69-osd--block--f35471dc--23d0--5222--b540--93882fae0f69', 'dm-uuid-LVM-bfTaqVa88Rh4Nequz5jEWqhv8Td4ZmNEk6j5EzVds24XoTOrltYM7dL2Lhbdua3t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075917 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.075947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:54:33.075992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part1', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part14', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part15', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part16', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part16'], 'labels':
['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--09698b4c--8482--58a0--ad33--d3500ef3a9f7-osd--block--09698b4c--8482--58a0--ad33--d3500ef3a9f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7JkHHe-bttj-aJwz-4iXT-Ljd7-kKVl-eKVWMP', 'scsi-0QEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830', 'scsi-SQEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f35471dc--23d0--5222--b540--93882fae0f69-osd--block--f35471dc--23d0--5222--b540--93882fae0f69'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2txnhq-Fhyu-kyj7-iRya-mECk-ZjRq-xPZGdV', 'scsi-0QEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767', 'scsi-SQEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac', 'scsi-SQEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f88c8806--82e1--5c41--a829--e62dc4a8fdb6-osd--block--f88c8806--82e1--5c41--a829--e62dc4a8fdb6', 'dm-uuid-LVM-PL6sVvcXnMQc2eiNHfOUI24TaeNmZfwUZwuYEvpVd1ZPqfmSI02R1EW4iawYgKm3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbedf305--2fae--5605--926c--96a21a5245d1-osd--block--fbedf305--2fae--5605--926c--96a21a5245d1', 'dm-uuid-LVM-RL6JixEV7A5I01cMNuWGdtUMze3uy7fwReT9hUfFvSByD1xD02QmFCPfrfxrr2bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-07-12 13:54:33.076124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076170 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.076182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part1', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part14', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part15', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part16', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f88c8806--82e1--5c41--a829--e62dc4a8fdb6-osd--block--f88c8806--82e1--5c41--a829--e62dc4a8fdb6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C0wuZN-oRBc-0l8h-zfMZ-pRfR-pgPn-zQO3yO', 'scsi-0QEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98', 'scsi-SQEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fbedf305--2fae--5605--926c--96a21a5245d1-osd--block--fbedf305--2fae--5605--926c--96a21a5245d1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NvbKyO-TkVx-bBB8-gBSa-V1TF-r7kw-A91xhV', 'scsi-0QEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa', 'scsi-SQEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2177925c--0e94--5467--9f04--b37733dbe47a-osd--block--2177925c--0e94--5467--9f04--b37733dbe47a', 'dm-uuid-LVM-HhJf71qEjqPRC94IO3h96dIc0QoGrWborFvpLuXK7q9owoecVv6ZEWdnKpmoS0BU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2025-07-12 13:54:33.076288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1', 'scsi-SQEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--10b3d195--009d--5006--b5f6--1b7aa1316d97-osd--block--10b3d195--009d--5006--b5f6--1b7aa1316d97', 'dm-uuid-LVM-G4QXe0RydoR02C1cjl3dfZHdcG2JRzgBfeSFfktNF4Pd0AIxdth5Rk39VfMqiFDg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076331 | orchestrator | skipping: [testbed-node-4] 
2025-07-12 13:54:33.076350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:54:33.076490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076504 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2177925c--0e94--5467--9f04--b37733dbe47a-osd--block--2177925c--0e94--5467--9f04--b37733dbe47a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i613m1-lBdX-HvMf-f2aJ-l1zY-Nwc5-iTWCrE', 'scsi-0QEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094', 'scsi-SQEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--10b3d195--009d--5006--b5f6--1b7aa1316d97-osd--block--10b3d195--009d--5006--b5f6--1b7aa1316d97'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gKK1g8-CQNw-FbmJ-foMT-xxHz-dmhJ-1Q4lcD', 'scsi-0QEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f', 'scsi-SQEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29', 'scsi-SQEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:54:33.076568 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.076580 | orchestrator | 2025-07-12 13:54:33.076592 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-12 13:54:33.076604 | orchestrator | Saturday 12 July 2025 13:43:12 +0000 (0:00:01.780) 0:00:30.812 ********* 2025-07-12 13:54:33.076616 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076628 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076640 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076651 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076668 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076685 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076704 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076716 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076734 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1', 'scsi-SQEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part1', 'scsi-SQEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part14', 'scsi-SQEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part15', 'scsi-SQEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part16', 'scsi-SQEMU_QEMU_HARDDISK_e9fe12d2-947e-4f68-8277-3ed645ecdab1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 13:54:33.076762 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076774 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076786 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076798 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076809 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076825 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076843 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076861 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076873 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076884 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.076902 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72', 'scsi-SQEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part1', 'scsi-SQEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part14', 'scsi-SQEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part15', 'scsi-SQEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part16', 'scsi-SQEMU_QEMU_HARDDISK_aa156cef-0214-4f9c-bceb-63dc1a9b9f72-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076921 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076940 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076952 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076964 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076975 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.076997 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077009 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077026 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077038 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077056 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22', 'scsi-SQEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part1', 'scsi-SQEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part14', 'scsi-SQEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part15', 'scsi-SQEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': 
['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part16', 'scsi-SQEMU_QEMU_HARDDISK_a40c6dc4-39fa-427b-93ef-c33f20a62f22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077075 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077086 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.077104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--09698b4c--8482--58a0--ad33--d3500ef3a9f7-osd--block--09698b4c--8482--58a0--ad33--d3500ef3a9f7', 'dm-uuid-LVM-4MF8FKekAfibsfbuuKjfJMplsjoYqjph0Xzt3sPd98YeKxR2QYYiQusPioenEqOL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077117 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f35471dc--23d0--5222--b540--93882fae0f69-osd--block--f35471dc--23d0--5222--b540--93882fae0f69', 'dm-uuid-LVM-bfTaqVa88Rh4Nequz5jEWqhv8Td4ZmNEk6j5EzVds24XoTOrltYM7dL2Lhbdua3t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077128 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.077140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077151 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-07-12 13:54:33.077202 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077214 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f88c8806--82e1--5c41--a829--e62dc4a8fdb6-osd--block--f88c8806--82e1--5c41--a829--e62dc4a8fdb6', 'dm-uuid-LVM-PL6sVvcXnMQc2eiNHfOUI24TaeNmZfwUZwuYEvpVd1ZPqfmSI02R1EW4iawYgKm3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077226 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbedf305--2fae--5605--926c--96a21a5245d1-osd--block--fbedf305--2fae--5605--926c--96a21a5245d1', 'dm-uuid-LVM-RL6JixEV7A5I01cMNuWGdtUMze3uy7fwReT9hUfFvSByD1xD02QmFCPfrfxrr2bf'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077260 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:54:33.077272 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.077290 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.077301 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.077313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.077325 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.077342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078107 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part1', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part14', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part15', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part16', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078140 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078153 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2177925c--0e94--5467--9f04--b37733dbe47a-osd--block--2177925c--0e94--5467--9f04--b37733dbe47a', 'dm-uuid-LVM-HhJf71qEjqPRC94IO3h96dIc0QoGrWborFvpLuXK7q9owoecVv6ZEWdnKpmoS0BU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078184 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078197 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--09698b4c--8482--58a0--ad33--d3500ef3a9f7-osd--block--09698b4c--8482--58a0--ad33--d3500ef3a9f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7JkHHe-bttj-aJwz-4iXT-Ljd7-kKVl-eKVWMP', 'scsi-0QEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830', 'scsi-SQEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078219 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078232 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--10b3d195--009d--5006--b5f6--1b7aa1316d97-osd--block--10b3d195--009d--5006--b5f6--1b7aa1316d97', 'dm-uuid-LVM-G4QXe0RydoR02C1cjl3dfZHdcG2JRzgBfeSFfktNF4Pd0AIxdth5Rk39VfMqiFDg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f35471dc--23d0--5222--b540--93882fae0f69-osd--block--f35471dc--23d0--5222--b540--93882fae0f69'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2txnhq-Fhyu-kyj7-iRya-mECk-ZjRq-xPZGdV', 'scsi-0QEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767', 'scsi-SQEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078273 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part1', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part14', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part15', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part16', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078287 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078298 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac', 'scsi-SQEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078316 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f88c8806--82e1--5c41--a829--e62dc4a8fdb6-osd--block--f88c8806--82e1--5c41--a829--e62dc4a8fdb6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C0wuZN-oRBc-0l8h-zfMZ-pRfR-pgPn-zQO3yO', 'scsi-0QEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98', 'scsi-SQEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078332 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078350 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fbedf305--2fae--5605--926c--96a21a5245d1-osd--block--fbedf305--2fae--5605--926c--96a21a5245d1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NvbKyO-TkVx-bBB8-gBSa-V1TF-r7kw-A91xhV', 'scsi-0QEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa', 'scsi-SQEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078362 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078374 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.078385 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078403 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1', 'scsi-SQEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078420 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078432 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078470 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078483 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.078494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078506 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078524 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078548 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078561 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2177925c--0e94--5467--9f04--b37733dbe47a-osd--block--2177925c--0e94--5467--9f04--b37733dbe47a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i613m1-lBdX-HvMf-f2aJ-l1zY-Nwc5-iTWCrE', 'scsi-0QEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094', 'scsi-SQEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078579 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--10b3d195--009d--5006--b5f6--1b7aa1316d97-osd--block--10b3d195--009d--5006--b5f6--1b7aa1316d97'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gKK1g8-CQNw-FbmJ-foMT-xxHz-dmhJ-1Q4lcD', 'scsi-0QEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f', 'scsi-SQEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078595 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29', 'scsi-SQEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078607 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:54:33.078619 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.078630 | orchestrator |
2025-07-12 13:54:33.078642 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-07-12 13:54:33.078655 | orchestrator | Saturday 12 July 2025 13:43:13 +0000 (0:00:00.878) 0:00:31.690 *********
2025-07-12 13:54:33.078668 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.078681 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.078693 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.078711 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.078724 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.078736 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.078748 | orchestrator |
2025-07-12 13:54:33.078762 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-07-12 13:54:33.078774 | orchestrator | Saturday 12 July 2025 13:43:15 +0000 (0:00:01.269) 0:00:32.959 *********
2025-07-12 13:54:33.078787 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.078799 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.078817 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.078829 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.078842 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.078855 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.078868 | orchestrator |
2025-07-12 13:54:33.078881 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 13:54:33.078893 | orchestrator | Saturday 12 July 2025 13:43:15 +0000 (0:00:00.875) 0:00:33.835 *********
2025-07-12 13:54:33.078906 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.078919 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.078931 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.078944 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.078956 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.078969 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.078981 | orchestrator |
2025-07-12 13:54:33.078994 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 13:54:33.079006 | orchestrator | Saturday 12 July 2025 13:43:16 +0000 (0:00:00.680) 0:00:34.515 *********
2025-07-12 13:54:33.079016 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.079027 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.079038 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.079049 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.079060 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.079071 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.079082 | orchestrator |
2025-07-12 13:54:33.079093 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 13:54:33.079104 | orchestrator | Saturday 12 July 2025 13:43:17 +0000 (0:00:00.506) 0:00:35.022 *********
2025-07-12 13:54:33.079115 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.079125 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.079136 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.079147 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.079158 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.079168 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.079179 | orchestrator |
2025-07-12 13:54:33.079190 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 13:54:33.079201 | orchestrator | Saturday 12 July 2025 13:43:18 +0000 (0:00:01.085) 0:00:36.107 *********
2025-07-12 13:54:33.079212 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.079223 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.079234 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.079245 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.079256 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.079267 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.079278 | orchestrator |
2025-07-12 13:54:33.079289 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-07-12 13:54:33.079300 | orchestrator | Saturday 12 July 2025 13:43:18 +0000 (0:00:00.635) 0:00:36.743 *********
2025-07-12 13:54:33.079312 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:33.079323 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-07-12 13:54:33.079334 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:33.079344 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-07-12 13:54:33.079355 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-07-12 13:54:33.079366 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 13:54:33.079377 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 13:54:33.079388 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-07-12 13:54:33.079399 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:33.079409 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-07-12 13:54:33.079425 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 13:54:33.079436 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 13:54:33.079470 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 13:54:33.079481 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 13:54:33.079492 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 13:54:33.079503 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-07-12 13:54:33.079514 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 13:54:33.079524 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 13:54:33.079535 | orchestrator |
2025-07-12 13:54:33.079546 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-07-12 13:54:33.079557 | orchestrator | Saturday 12 July 2025 13:43:21 +0000 (0:00:02.515) 0:00:39.258 *********
2025-07-12 13:54:33.079568 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:33.079579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:33.079590 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:33.079601 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.079611 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-07-12 13:54:33.079622 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-07-12 13:54:33.079633 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-07-12 13:54:33.079644 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-07-12 13:54:33.079654 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-07-12 13:54:33.079665 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-07-12 13:54:33.079676 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.079687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 13:54:33.079704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 13:54:33.079715 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 13:54:33.079726 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.079737 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 13:54:33.079747 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 13:54:33.079758 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 13:54:33.079769 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.079780 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.079791 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 13:54:33.079801 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 13:54:33.079812 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 13:54:33.079823 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.079834 | orchestrator |
2025-07-12 13:54:33.079845 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-07-12 13:54:33.079855 | orchestrator | Saturday 12 July 2025 13:43:22 +0000 (0:00:00.667) 0:00:39.926 *********
2025-07-12 13:54:33.079866 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.079877 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.079888 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.079899 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.079910 | orchestrator |
2025-07-12 13:54:33.079921 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-12 13:54:33.079932 | orchestrator | Saturday 12 July 2025 13:43:23 +0000 (0:00:01.096) 0:00:41.023 *********
2025-07-12 13:54:33.079943 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.079954 | orchestrator | skipping:
[testbed-node-4] 2025-07-12 13:54:33.079965 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.079975 | orchestrator | 2025-07-12 13:54:33.079986 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-12 13:54:33.080006 | orchestrator | Saturday 12 July 2025 13:43:23 +0000 (0:00:00.639) 0:00:41.662 ********* 2025-07-12 13:54:33.080017 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.080028 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.080039 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.080050 | orchestrator | 2025-07-12 13:54:33.080061 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-12 13:54:33.080071 | orchestrator | Saturday 12 July 2025 13:43:24 +0000 (0:00:00.476) 0:00:42.139 ********* 2025-07-12 13:54:33.080082 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.080093 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.080104 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.080115 | orchestrator | 2025-07-12 13:54:33.080126 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-12 13:54:33.080136 | orchestrator | Saturday 12 July 2025 13:43:24 +0000 (0:00:00.291) 0:00:42.431 ********* 2025-07-12 13:54:33.080147 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.080158 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.080169 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.080179 | orchestrator | 2025-07-12 13:54:33.080190 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-12 13:54:33.080201 | orchestrator | Saturday 12 July 2025 13:43:24 +0000 (0:00:00.359) 0:00:42.790 ********* 2025-07-12 13:54:33.080212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:33.080223 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:33.080234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:33.080244 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.080255 | orchestrator | 2025-07-12 13:54:33.080266 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-12 13:54:33.080277 | orchestrator | Saturday 12 July 2025 13:43:25 +0000 (0:00:00.533) 0:00:43.324 ********* 2025-07-12 13:54:33.080292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:33.080303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:33.080335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:33.080347 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.080358 | orchestrator | 2025-07-12 13:54:33.080369 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-12 13:54:33.080380 | orchestrator | Saturday 12 July 2025 13:43:26 +0000 (0:00:00.762) 0:00:44.087 ********* 2025-07-12 13:54:33.080390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:33.080401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:33.080412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:33.080423 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.080433 | orchestrator | 2025-07-12 13:54:33.080500 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-12 13:54:33.080512 | orchestrator | Saturday 12 July 2025 13:43:27 +0000 (0:00:01.042) 0:00:45.129 ********* 2025-07-12 13:54:33.080523 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.080534 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.080545 | orchestrator | ok: [testbed-node-5] 
2025-07-12 13:54:33.080556 | orchestrator | 2025-07-12 13:54:33.080567 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-12 13:54:33.080578 | orchestrator | Saturday 12 July 2025 13:43:27 +0000 (0:00:00.684) 0:00:45.814 ********* 2025-07-12 13:54:33.080588 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-12 13:54:33.080599 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-12 13:54:33.080610 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-12 13:54:33.080621 | orchestrator | 2025-07-12 13:54:33.080632 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-12 13:54:33.080650 | orchestrator | Saturday 12 July 2025 13:43:29 +0000 (0:00:01.263) 0:00:47.078 ********* 2025-07-12 13:54:33.080667 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 13:54:33.080679 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 13:54:33.080690 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 13:54:33.080701 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-07-12 13:54:33.080712 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-12 13:54:33.080723 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 13:54:33.080733 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 13:54:33.080744 | orchestrator | 2025-07-12 13:54:33.080755 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-12 13:54:33.080766 | orchestrator | Saturday 12 July 2025 13:43:29 +0000 (0:00:00.777) 0:00:47.855 ********* 2025-07-12 13:54:33.080777 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-07-12 13:54:33.080788 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 13:54:33.080798 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 13:54:33.080809 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-07-12 13:54:33.080820 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-12 13:54:33.080831 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 13:54:33.080841 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 13:54:33.080852 | orchestrator | 2025-07-12 13:54:33.080863 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 13:54:33.080874 | orchestrator | Saturday 12 July 2025 13:43:31 +0000 (0:00:01.993) 0:00:49.849 ********* 2025-07-12 13:54:33.080885 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.080898 | orchestrator | 2025-07-12 13:54:33.080909 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 13:54:33.080919 | orchestrator | Saturday 12 July 2025 13:43:33 +0000 (0:00:01.194) 0:00:51.044 ********* 2025-07-12 13:54:33.080930 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.080941 | orchestrator | 2025-07-12 13:54:33.080952 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 13:54:33.080963 | orchestrator | Saturday 12 July 2025 
13:43:34 +0000 (0:00:01.555) 0:00:52.599 ********* 2025-07-12 13:54:33.080974 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.080985 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.080996 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.081006 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.081016 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.081025 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.081035 | orchestrator | 2025-07-12 13:54:33.081044 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 13:54:33.081054 | orchestrator | Saturday 12 July 2025 13:43:35 +0000 (0:00:00.831) 0:00:53.431 ********* 2025-07-12 13:54:33.081064 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.081073 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.081083 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.081092 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.081102 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.081116 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.081131 | orchestrator | 2025-07-12 13:54:33.081141 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 13:54:33.081151 | orchestrator | Saturday 12 July 2025 13:43:36 +0000 (0:00:01.206) 0:00:54.637 ********* 2025-07-12 13:54:33.081161 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.081170 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.081180 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.081189 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.081199 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.081209 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.081218 | orchestrator | 2025-07-12 13:54:33.081228 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2025-07-12 13:54:33.081238 | orchestrator | Saturday 12 July 2025 13:43:38 +0000 (0:00:01.281) 0:00:55.919 ********* 2025-07-12 13:54:33.081247 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.081257 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.081267 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.081276 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.081286 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.081295 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.081305 | orchestrator | 2025-07-12 13:54:33.081315 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 13:54:33.081325 | orchestrator | Saturday 12 July 2025 13:43:39 +0000 (0:00:01.262) 0:00:57.182 ********* 2025-07-12 13:54:33.081335 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.081344 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.081354 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.081364 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.081373 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.081383 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.081393 | orchestrator | 2025-07-12 13:54:33.081402 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 13:54:33.081412 | orchestrator | Saturday 12 July 2025 13:43:40 +0000 (0:00:01.026) 0:00:58.209 ********* 2025-07-12 13:54:33.081426 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.081436 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.081461 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.081471 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.081481 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.081491 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.081500 | 
orchestrator | 2025-07-12 13:54:33.081510 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 13:54:33.081520 | orchestrator | Saturday 12 July 2025 13:43:41 +0000 (0:00:00.736) 0:00:58.945 ********* 2025-07-12 13:54:33.081529 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.081539 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.081549 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.081558 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.081568 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.081577 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.081587 | orchestrator | 2025-07-12 13:54:33.081596 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 13:54:33.081606 | orchestrator | Saturday 12 July 2025 13:43:42 +0000 (0:00:01.347) 0:01:00.292 ********* 2025-07-12 13:54:33.081616 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.081625 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.081635 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.081645 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.081654 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.081664 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.081673 | orchestrator | 2025-07-12 13:54:33.081683 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 13:54:33.081693 | orchestrator | Saturday 12 July 2025 13:43:44 +0000 (0:00:01.670) 0:01:01.963 ********* 2025-07-12 13:54:33.081708 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.081718 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.081728 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.081737 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.081747 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.081756 | 
orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.081766 | orchestrator | 2025-07-12 13:54:33.081776 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 13:54:33.081785 | orchestrator | Saturday 12 July 2025 13:43:45 +0000 (0:00:01.794) 0:01:03.757 ********* 2025-07-12 13:54:33.081795 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.081804 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.081814 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.081824 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.081833 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.081843 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.081852 | orchestrator | 2025-07-12 13:54:33.081862 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 13:54:33.081872 | orchestrator | Saturday 12 July 2025 13:43:46 +0000 (0:00:00.685) 0:01:04.442 ********* 2025-07-12 13:54:33.081881 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.081891 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.081900 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.081910 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.081919 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.081929 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.081938 | orchestrator | 2025-07-12 13:54:33.081948 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 13:54:33.081958 | orchestrator | Saturday 12 July 2025 13:43:47 +0000 (0:00:00.917) 0:01:05.360 ********* 2025-07-12 13:54:33.081967 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.081977 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.081986 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.081996 | orchestrator | ok: 
[testbed-node-3] 2025-07-12 13:54:33.082005 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.082041 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.082054 | orchestrator | 2025-07-12 13:54:33.082064 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 13:54:33.082074 | orchestrator | Saturday 12 July 2025 13:43:48 +0000 (0:00:00.996) 0:01:06.357 ********* 2025-07-12 13:54:33.082084 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.082093 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.082103 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.082113 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.082127 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.082137 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.082146 | orchestrator | 2025-07-12 13:54:33.082156 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 13:54:33.082166 | orchestrator | Saturday 12 July 2025 13:43:49 +0000 (0:00:00.996) 0:01:07.354 ********* 2025-07-12 13:54:33.082176 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.082185 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.082195 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.082204 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.082214 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.082224 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.082233 | orchestrator | 2025-07-12 13:54:33.082243 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 13:54:33.082253 | orchestrator | Saturday 12 July 2025 13:43:50 +0000 (0:00:00.682) 0:01:08.037 ********* 2025-07-12 13:54:33.082262 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.082272 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.082282 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.082292 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.082307 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.082316 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.082326 | orchestrator | 2025-07-12 13:54:33.082336 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 13:54:33.082345 | orchestrator | Saturday 12 July 2025 13:43:51 +0000 (0:00:01.058) 0:01:09.095 ********* 2025-07-12 13:54:33.082355 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.082364 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.082374 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.082384 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.082393 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.082403 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.082412 | orchestrator | 2025-07-12 13:54:33.082422 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 13:54:33.082464 | orchestrator | Saturday 12 July 2025 13:43:52 +0000 (0:00:00.784) 0:01:09.880 ********* 2025-07-12 13:54:33.082475 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.082485 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.082495 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.082504 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.082514 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.082523 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.082533 | orchestrator | 2025-07-12 13:54:33.082542 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 13:54:33.082552 | orchestrator | Saturday 12 July 2025 13:43:52 +0000 (0:00:00.778) 0:01:10.658 ********* 2025-07-12 13:54:33.082561 | orchestrator | ok: 
[testbed-node-0] 2025-07-12 13:54:33.082571 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.082581 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.082590 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.082600 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.082609 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.082619 | orchestrator | 2025-07-12 13:54:33.082628 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 13:54:33.082638 | orchestrator | Saturday 12 July 2025 13:43:53 +0000 (0:00:00.564) 0:01:11.223 ********* 2025-07-12 13:54:33.082648 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.082657 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.082666 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.082676 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.082685 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.082695 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.082704 | orchestrator | 2025-07-12 13:54:33.082714 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-07-12 13:54:33.082724 | orchestrator | Saturday 12 July 2025 13:43:54 +0000 (0:00:01.203) 0:01:12.426 ********* 2025-07-12 13:54:33.082733 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.082743 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.082753 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.082762 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.082771 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.082781 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.082791 | orchestrator | 2025-07-12 13:54:33.082800 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-07-12 13:54:33.082810 | orchestrator | Saturday 12 July 2025 13:43:56 +0000 (0:00:01.700) 0:01:14.126 
********* 2025-07-12 13:54:33.082819 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.082829 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.082839 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.082848 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.082858 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.082880 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.082890 | orchestrator | 2025-07-12 13:54:33.082899 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-07-12 13:54:33.082909 | orchestrator | Saturday 12 July 2025 13:43:58 +0000 (0:00:02.006) 0:01:16.133 ********* 2025-07-12 13:54:33.082925 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.082935 | orchestrator | 2025-07-12 13:54:33.082945 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-07-12 13:54:33.082954 | orchestrator | Saturday 12 July 2025 13:43:59 +0000 (0:00:01.159) 0:01:17.292 ********* 2025-07-12 13:54:33.082964 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.082973 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.082983 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.082992 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.083002 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.083012 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.083021 | orchestrator | 2025-07-12 13:54:33.083031 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-07-12 13:54:33.083041 | orchestrator | Saturday 12 July 2025 13:44:00 +0000 (0:00:00.816) 0:01:18.109 ********* 2025-07-12 13:54:33.083050 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 13:54:33.083060 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.083069 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.083083 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.083093 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.083103 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.083112 | orchestrator | 2025-07-12 13:54:33.083122 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-07-12 13:54:33.083131 | orchestrator | Saturday 12 July 2025 13:44:00 +0000 (0:00:00.573) 0:01:18.682 ********* 2025-07-12 13:54:33.083141 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:33.083150 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:33.083160 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:33.083170 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:33.083179 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:33.083189 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:33.083198 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:33.083208 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-07-12 13:54:33.083217 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:33.083227 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:33.083236 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:33.083246 | 
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-07-12 13:54:33.083256 | orchestrator | 2025-07-12 13:54:33.083271 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-07-12 13:54:33.083281 | orchestrator | Saturday 12 July 2025 13:44:02 +0000 (0:00:01.589) 0:01:20.271 ********* 2025-07-12 13:54:33.083291 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.083300 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.083310 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.083319 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.083329 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.083339 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.083348 | orchestrator | 2025-07-12 13:54:33.083358 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-07-12 13:54:33.083368 | orchestrator | Saturday 12 July 2025 13:44:03 +0000 (0:00:00.919) 0:01:21.190 ********* 2025-07-12 13:54:33.083383 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.083393 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.083403 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.083412 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.083422 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.083431 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.083458 | orchestrator | 2025-07-12 13:54:33.083468 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-07-12 13:54:33.083478 | orchestrator | Saturday 12 July 2025 13:44:04 +0000 (0:00:00.796) 0:01:21.987 ********* 2025-07-12 13:54:33.083488 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.083497 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.083507 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 13:54:33.083516 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.083526 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.083535 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.083545 | orchestrator | 2025-07-12 13:54:33.083555 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-07-12 13:54:33.083564 | orchestrator | Saturday 12 July 2025 13:44:04 +0000 (0:00:00.555) 0:01:22.542 ********* 2025-07-12 13:54:33.083574 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.083584 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.083593 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.083602 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.083612 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.083621 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.083631 | orchestrator | 2025-07-12 13:54:33.083640 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-07-12 13:54:33.083650 | orchestrator | Saturday 12 July 2025 13:44:05 +0000 (0:00:00.766) 0:01:23.309 ********* 2025-07-12 13:54:33.083660 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.083670 | orchestrator | 2025-07-12 13:54:33.083679 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-07-12 13:54:33.083689 | orchestrator | Saturday 12 July 2025 13:44:06 +0000 (0:00:01.203) 0:01:24.513 ********* 2025-07-12 13:54:33.083698 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.083708 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.083717 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.083727 | orchestrator | ok: [testbed-node-1] 2025-07-12 
13:54:33.083736 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.083746 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.083755 | orchestrator | 2025-07-12 13:54:33.083765 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-07-12 13:54:33.083775 | orchestrator | Saturday 12 July 2025 13:45:51 +0000 (0:01:44.854) 0:03:09.367 ********* 2025-07-12 13:54:33.083785 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:33.083794 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:33.083804 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:33.083813 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.083827 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:33.083837 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:33.083847 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:33.083856 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.083866 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:33.083875 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:33.083890 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:33.083900 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.083910 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:33.083919 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:33.083929 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:33.083938 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.083948 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:33.083958 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:33.083967 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:33.083977 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.083986 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-07-12 13:54:33.083996 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-07-12 13:54:33.084005 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-07-12 13:54:33.084019 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.084029 | orchestrator | 2025-07-12 13:54:33.084039 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-07-12 13:54:33.084049 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:00.759) 0:03:10.127 ********* 2025-07-12 13:54:33.084058 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.084068 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.084077 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.084087 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.084097 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.084106 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.084116 | orchestrator | 2025-07-12 13:54:33.084125 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-07-12 13:54:33.084135 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:00.537) 0:03:10.665 ********* 2025-07-12 13:54:33.084144 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 13:54:33.084154 | orchestrator | 2025-07-12 13:54:33.084163 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-07-12 13:54:33.084173 | orchestrator | Saturday 12 July 2025 13:45:52 +0000 (0:00:00.126) 0:03:10.791 ********* 2025-07-12 13:54:33.084182 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.084192 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.084202 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.084211 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.084221 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.084230 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.084240 | orchestrator | 2025-07-12 13:54:33.084249 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-07-12 13:54:33.084259 | orchestrator | Saturday 12 July 2025 13:45:53 +0000 (0:00:00.718) 0:03:11.510 ********* 2025-07-12 13:54:33.084268 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.084278 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.084287 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.084297 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.084306 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.084316 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.084325 | orchestrator | 2025-07-12 13:54:33.084335 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-07-12 13:54:33.084345 | orchestrator | Saturday 12 July 2025 13:45:54 +0000 (0:00:00.697) 0:03:12.208 ********* 2025-07-12 13:54:33.084354 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.084364 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.084373 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.084388 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 13:54:33.084398 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.084407 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.084417 | orchestrator | 2025-07-12 13:54:33.084427 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-07-12 13:54:33.084436 | orchestrator | Saturday 12 July 2025 13:45:55 +0000 (0:00:00.706) 0:03:12.914 ********* 2025-07-12 13:54:33.084487 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.084497 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.084507 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.084517 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.084526 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.084536 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.084545 | orchestrator | 2025-07-12 13:54:33.084555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-07-12 13:54:33.084565 | orchestrator | Saturday 12 July 2025 13:45:57 +0000 (0:00:02.505) 0:03:15.420 ********* 2025-07-12 13:54:33.084575 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.084584 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.084594 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.084603 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.084613 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.084622 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.084632 | orchestrator | 2025-07-12 13:54:33.084642 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-07-12 13:54:33.084651 | orchestrator | Saturday 12 July 2025 13:45:58 +0000 (0:00:00.722) 0:03:16.142 ********* 2025-07-12 13:54:33.084665 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2025-07-12 13:54:33.084677 | orchestrator | 2025-07-12 13:54:33.084686 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-07-12 13:54:33.084696 | orchestrator | Saturday 12 July 2025 13:45:59 +0000 (0:00:01.079) 0:03:17.222 ********* 2025-07-12 13:54:33.084705 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.084715 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.084725 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.084734 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.084742 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.084750 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.084758 | orchestrator | 2025-07-12 13:54:33.084766 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-07-12 13:54:33.084774 | orchestrator | Saturday 12 July 2025 13:46:00 +0000 (0:00:00.694) 0:03:17.917 ********* 2025-07-12 13:54:33.084781 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.084789 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.084797 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.084805 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.084812 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.084820 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.084828 | orchestrator | 2025-07-12 13:54:33.084836 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-07-12 13:54:33.084844 | orchestrator | Saturday 12 July 2025 13:46:00 +0000 (0:00:00.827) 0:03:18.744 ********* 2025-07-12 13:54:33.084851 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.084859 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.084867 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.084875 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 13:54:33.084883 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.084890 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.084898 | orchestrator | 2025-07-12 13:54:33.084906 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-07-12 13:54:33.084919 | orchestrator | Saturday 12 July 2025 13:46:01 +0000 (0:00:00.659) 0:03:19.404 ********* 2025-07-12 13:54:33.084939 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.084948 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.084955 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.084963 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.084971 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.084979 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.084987 | orchestrator | 2025-07-12 13:54:33.084995 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-07-12 13:54:33.085003 | orchestrator | Saturday 12 July 2025 13:46:02 +0000 (0:00:00.890) 0:03:20.295 ********* 2025-07-12 13:54:33.085010 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.085018 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.085026 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.085034 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.085042 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.085049 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.085057 | orchestrator | 2025-07-12 13:54:33.085065 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-07-12 13:54:33.085073 | orchestrator | Saturday 12 July 2025 13:46:03 +0000 (0:00:00.626) 0:03:20.921 ********* 2025-07-12 13:54:33.085081 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.085089 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 13:54:33.085097 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.085105 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.085113 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.085120 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.085128 | orchestrator | 2025-07-12 13:54:33.085136 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-07-12 13:54:33.085144 | orchestrator | Saturday 12 July 2025 13:46:03 +0000 (0:00:00.867) 0:03:21.789 ********* 2025-07-12 13:54:33.085152 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.085160 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.085168 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.085175 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.085183 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.085191 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.085199 | orchestrator | 2025-07-12 13:54:33.085207 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-07-12 13:54:33.085215 | orchestrator | Saturday 12 July 2025 13:46:04 +0000 (0:00:00.651) 0:03:22.441 ********* 2025-07-12 13:54:33.085222 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.085230 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.085238 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.085246 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.085254 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.085261 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.085269 | orchestrator | 2025-07-12 13:54:33.085277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-07-12 13:54:33.085285 | orchestrator | Saturday 12 July 2025 13:46:05 +0000 (0:00:00.803) 
0:03:23.245 ********* 2025-07-12 13:54:33.085293 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.085301 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.085309 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.085317 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.085325 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.085333 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.085341 | orchestrator | 2025-07-12 13:54:33.085349 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-07-12 13:54:33.085357 | orchestrator | Saturday 12 July 2025 13:46:06 +0000 (0:00:01.061) 0:03:24.307 ********* 2025-07-12 13:54:33.085365 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.085378 | orchestrator | 2025-07-12 13:54:33.085386 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-07-12 13:54:33.085394 | orchestrator | Saturday 12 July 2025 13:46:07 +0000 (0:00:01.027) 0:03:25.335 ********* 2025-07-12 13:54:33.085405 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-07-12 13:54:33.085413 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-07-12 13:54:33.085421 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-07-12 13:54:33.085429 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-07-12 13:54:33.085437 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-07-12 13:54:33.085457 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-07-12 13:54:33.085465 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-07-12 13:54:33.085473 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-07-12 13:54:33.085481 | orchestrator | changed: [testbed-node-3] 
=> (item=/var/lib/ceph/) 2025-07-12 13:54:33.085489 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-07-12 13:54:33.085496 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-07-12 13:54:33.085504 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-07-12 13:54:33.085512 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-07-12 13:54:33.085520 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-07-12 13:54:33.085528 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-07-12 13:54:33.085536 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-07-12 13:54:33.085543 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-07-12 13:54:33.085551 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-07-12 13:54:33.085559 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-07-12 13:54:33.085567 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-07-12 13:54:33.085575 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-07-12 13:54:33.085587 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-07-12 13:54:33.085595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-07-12 13:54:33.085603 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-07-12 13:54:33.085610 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-07-12 13:54:33.085618 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-07-12 13:54:33.085626 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-07-12 13:54:33.085634 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-07-12 13:54:33.085642 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-07-12 
13:54:33.085649 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-07-12 13:54:33.085657 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-07-12 13:54:33.085665 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-07-12 13:54:33.085673 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-07-12 13:54:33.085680 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-07-12 13:54:33.085688 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-07-12 13:54:33.085696 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-07-12 13:54:33.085704 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-07-12 13:54:33.085712 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-07-12 13:54:33.085719 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-07-12 13:54:33.085727 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-07-12 13:54:33.085735 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-07-12 13:54:33.085748 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-07-12 13:54:33.085756 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-07-12 13:54:33.085764 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-07-12 13:54:33.085771 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 13:54:33.085779 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-07-12 13:54:33.085787 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-07-12 13:54:33.085795 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-07-12 13:54:33.085803 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 
2025-07-12 13:54:33.085811 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 13:54:33.085819 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 13:54:33.085827 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 13:54:33.085834 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 13:54:33.085842 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 13:54:33.085850 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-12 13:54:33.085858 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 13:54:33.085866 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 13:54:33.085873 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 13:54:33.085881 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 13:54:33.085889 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 13:54:33.085897 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-12 13:54:33.085905 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 13:54:33.085912 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 13:54:33.085920 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 13:54:33.085928 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 13:54:33.086005 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-12 13:54:33.086055 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 13:54:33.086063 | orchestrator | changed: 
[testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 13:54:33.086072 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 13:54:33.086079 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 13:54:33.086087 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 13:54:33.086095 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-12 13:54:33.086103 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 13:54:33.086110 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 13:54:33.086118 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 13:54:33.086126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 13:54:33.086134 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 13:54:33.086142 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 13:54:33.086150 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 13:54:33.086169 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 13:54:33.086178 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-12 13:54:33.086192 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 13:54:33.086200 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-07-12 13:54:33.086208 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-07-12 13:54:33.086216 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 13:54:33.086224 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 
2025-07-12 13:54:33.086232 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-12 13:54:33.086239 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-07-12 13:54:33.086247 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-07-12 13:54:33.086255 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-07-12 13:54:33.086263 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-07-12 13:54:33.086271 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-07-12 13:54:33.086279 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-07-12 13:54:33.086287 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-07-12 13:54:33.086295 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-07-12 13:54:33.086303 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-07-12 13:54:33.086310 | orchestrator | 2025-07-12 13:54:33.086318 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-07-12 13:54:33.086326 | orchestrator | Saturday 12 July 2025 13:46:14 +0000 (0:00:06.571) 0:03:31.906 ********* 2025-07-12 13:54:33.086334 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.086342 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.086350 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.086358 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.086366 | orchestrator | 2025-07-12 13:54:33.086374 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-07-12 13:54:33.086382 | orchestrator | Saturday 12 July 2025 13:46:14 +0000 (0:00:00.943) 0:03:32.850 ********* 2025-07-12 13:54:33.086390 | orchestrator | changed: [testbed-node-3] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.086398 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.086406 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.086414 | orchestrator | 2025-07-12 13:54:33.086422 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-07-12 13:54:33.086430 | orchestrator | Saturday 12 July 2025 13:46:15 +0000 (0:00:00.719) 0:03:33.570 ********* 2025-07-12 13:54:33.086449 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.086458 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.086466 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.086474 | orchestrator | 2025-07-12 13:54:33.086482 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-07-12 13:54:33.086493 | orchestrator | Saturday 12 July 2025 13:46:17 +0000 (0:00:01.338) 0:03:34.908 ********* 2025-07-12 13:54:33.086501 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.086509 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.086517 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.086525 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.086538 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.086546 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.086553 | orchestrator | 2025-07-12 
13:54:33.086561 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-07-12 13:54:33.086569 | orchestrator | Saturday 12 July 2025 13:46:17 +0000 (0:00:00.680) 0:03:35.589 ********* 2025-07-12 13:54:33.086577 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.086585 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.086593 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.086600 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.086608 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.086616 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.086624 | orchestrator | 2025-07-12 13:54:33.086632 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-07-12 13:54:33.086640 | orchestrator | Saturday 12 July 2025 13:46:18 +0000 (0:00:00.860) 0:03:36.449 ********* 2025-07-12 13:54:33.086648 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.086655 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.086663 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.086671 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.086679 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.086686 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.086694 | orchestrator | 2025-07-12 13:54:33.086702 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-07-12 13:54:33.086710 | orchestrator | Saturday 12 July 2025 13:46:19 +0000 (0:00:00.627) 0:03:37.077 ********* 2025-07-12 13:54:33.086718 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.086726 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.086738 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.086746 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.086754 | orchestrator | skipping: [testbed-node-4] 
2025-07-12 13:54:33.086762 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.086769 | orchestrator | 2025-07-12 13:54:33.086777 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-07-12 13:54:33.086785 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:00.898) 0:03:37.975 ********* 2025-07-12 13:54:33.086793 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.086801 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.086809 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.086816 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.086824 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.086832 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.086840 | orchestrator | 2025-07-12 13:54:33.086848 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-07-12 13:54:33.086856 | orchestrator | Saturday 12 July 2025 13:46:20 +0000 (0:00:00.661) 0:03:38.637 ********* 2025-07-12 13:54:33.086864 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.086871 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.086879 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.086887 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.086895 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.086903 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.086910 | orchestrator | 2025-07-12 13:54:33.086918 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-07-12 13:54:33.086926 | orchestrator | Saturday 12 July 2025 13:46:21 +0000 (0:00:01.010) 0:03:39.647 ********* 2025-07-12 13:54:33.086934 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.086942 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.086950 
| orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.086958 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.086966 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.086973 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.086981 | orchestrator | 2025-07-12 13:54:33.086994 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-07-12 13:54:33.087002 | orchestrator | Saturday 12 July 2025 13:46:22 +0000 (0:00:00.715) 0:03:40.363 ********* 2025-07-12 13:54:33.087010 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.087018 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.087026 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.087034 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.087042 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.087050 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.087057 | orchestrator | 2025-07-12 13:54:33.087065 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-07-12 13:54:33.087073 | orchestrator | Saturday 12 July 2025 13:46:23 +0000 (0:00:00.826) 0:03:41.190 ********* 2025-07-12 13:54:33.087081 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.087089 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.087097 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.087105 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.087113 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.087121 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.087129 | orchestrator | 2025-07-12 13:54:33.087137 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-07-12 13:54:33.087145 | orchestrator | Saturday 12 July 2025 13:46:26 +0000 (0:00:03.473) 0:03:44.663 ********* 
2025-07-12 13:54:33.087153 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087161 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087168 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087176 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.087184 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.087192 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.087200 | orchestrator |
2025-07-12 13:54:33.087208 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-07-12 13:54:33.087216 | orchestrator | Saturday 12 July 2025 13:46:27 +0000 (0:00:00.826) 0:03:45.489 *********
2025-07-12 13:54:33.087224 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087232 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087243 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087251 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.087259 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.087267 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.087275 | orchestrator |
2025-07-12 13:54:33.087283 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-07-12 13:54:33.087291 | orchestrator | Saturday 12 July 2025 13:46:28 +0000 (0:00:00.529) 0:03:46.018 *********
2025-07-12 13:54:33.087299 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087307 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087315 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087322 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.087330 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.087338 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.087346 | orchestrator |
2025-07-12 13:54:33.087354 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-07-12 13:54:33.087361 | orchestrator | Saturday 12 July 2025 13:46:28 +0000 (0:00:00.573) 0:03:46.592 *********
2025-07-12 13:54:33.087369 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087377 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087385 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087393 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:33.087401 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:33.087409 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-12 13:54:33.087422 | orchestrator |
2025-07-12 13:54:33.087430 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-07-12 13:54:33.087477 | orchestrator | Saturday 12 July 2025 13:46:29 +0000 (0:00:00.492) 0:03:47.085 *********
2025-07-12 13:54:33.087487 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087495 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087503 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087512 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-07-12 13:54:33.087521 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-07-12 13:54:33.087531 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-07-12 13:54:33.087539 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-07-12 13:54:33.087547 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.087555 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.087563 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-07-12 13:54:33.087571 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-07-12 13:54:33.087579 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.087587 | orchestrator |
2025-07-12 13:54:33.087595 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-07-12 13:54:33.087603 | orchestrator | Saturday 12 July 2025 13:46:29 +0000 (0:00:00.735) 0:03:47.820 *********
2025-07-12 13:54:33.087611 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087619 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087627 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087635 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.087642 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.087650 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.087658 | orchestrator |
2025-07-12 13:54:33.087666 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-07-12 13:54:33.087674 | orchestrator | Saturday 12 July 2025 13:46:30 +0000 (0:00:00.594) 0:03:48.415 *********
2025-07-12 13:54:33.087682 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087693 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087701 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087709 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.087717 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.087725 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.087738 | orchestrator |
2025-07-12 13:54:33.087746 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-12 13:54:33.087754 | orchestrator | Saturday 12 July 2025 13:46:31 +0000 (0:00:00.845) 0:03:49.260 *********
2025-07-12 13:54:33.087761 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087767 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087774 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087781 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.087787 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.087794 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.087800 | orchestrator |
2025-07-12 13:54:33.087807 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-12 13:54:33.087814 | orchestrator | Saturday 12 July 2025 13:46:32 +0000 (0:00:00.753) 0:03:50.014 *********
2025-07-12 13:54:33.087820 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087827 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087833 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087840 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.087846 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.087853 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.087860 | orchestrator |
2025-07-12 13:54:33.087866 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-12 13:54:33.087873 | orchestrator | Saturday 12 July 2025 13:46:33 +0000 (0:00:00.933) 0:03:50.947 *********
2025-07-12 13:54:33.087879 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087886 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087893 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087903 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.087910 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.087917 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.087923 | orchestrator |
2025-07-12 13:54:33.087930 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-12 13:54:33.087937 | orchestrator | Saturday 12 July 2025 13:46:33 +0000 (0:00:00.724) 0:03:51.672 *********
2025-07-12 13:54:33.087943 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.087950 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.087957 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.087964 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.087970 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.087977 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.087984 | orchestrator |
2025-07-12 13:54:33.087990 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-12 13:54:33.087997 | orchestrator | Saturday 12 July 2025 13:46:34 +0000 (0:00:00.954) 0:03:52.626 *********
2025-07-12 13:54:33.088004 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-07-12 13:54:33.088010 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-07-12 13:54:33.088017 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-07-12 13:54:33.088024 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.088030 | orchestrator |
2025-07-12 13:54:33.088037 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-12 13:54:33.088044 | orchestrator | Saturday 12 July 2025 13:46:35 +0000 (0:00:00.415) 0:03:53.042 *********
2025-07-12 13:54:33.088050 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-07-12 13:54:33.088057 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-07-12 13:54:33.088064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-07-12 13:54:33.088071 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.088077 | orchestrator |
2025-07-12 13:54:33.088084 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-12 13:54:33.088091 | orchestrator | Saturday 12 July 2025 13:46:35 +0000 (0:00:00.414) 0:03:53.457 *********
2025-07-12 13:54:33.088097 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-07-12 13:54:33.088108 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-07-12 13:54:33.088115 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-07-12 13:54:33.088122 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.088128 | orchestrator |
2025-07-12 13:54:33.088135 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-12 13:54:33.088142 | orchestrator | Saturday 12 July 2025 13:46:36 +0000 (0:00:00.424) 0:03:53.881 *********
2025-07-12 13:54:33.088148 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.088155 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.088162 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.088168 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.088175 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.088182 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.088188 | orchestrator |
2025-07-12 13:54:33.088195 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-12 13:54:33.088202 | orchestrator | Saturday 12 July 2025 13:46:36 +0000 (0:00:00.630) 0:03:54.511 *********
2025-07-12 13:54:33.088209 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-07-12 13:54:33.088215 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.088222 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-07-12 13:54:33.088228 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.088235 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-07-12 13:54:33.088242 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.088248 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-12 13:54:33.088255 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-12 13:54:33.088261 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-12 13:54:33.088268 | orchestrator |
2025-07-12 13:54:33.088275 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-07-12 13:54:33.088281 | orchestrator | Saturday 12 July 2025 13:46:38 +0000 (0:00:01.998) 0:03:56.510 *********
2025-07-12 13:54:33.088288 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.088298 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.088305 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.088312 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.088318 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.088325 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.088332 | orchestrator |
2025-07-12 13:54:33.088338 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 13:54:33.088345 | orchestrator | Saturday 12 July 2025 13:46:41 +0000 (0:00:02.832) 0:03:59.343 *********
2025-07-12 13:54:33.088352 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.088358 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.088365 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.088372 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.088378 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.088385 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.088391 | orchestrator |
2025-07-12 13:54:33.088398 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-07-12 13:54:33.088405 | orchestrator | Saturday 12 July 2025 13:46:42 +0000 (0:00:01.212) 0:04:00.556 *********
2025-07-12 13:54:33.088411 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.088418 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.088425 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.088432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:33.088452 | orchestrator |
2025-07-12 13:54:33.088459 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-07-12 13:54:33.088466 | orchestrator | Saturday 12 July 2025 13:46:43 +0000 (0:00:01.026) 0:04:01.582 *********
2025-07-12 13:54:33.088473 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.088479 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.088492 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.088499 | orchestrator |
2025-07-12 13:54:33.088505 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-07-12 13:54:33.088515 | orchestrator | Saturday 12 July 2025 13:46:44 +0000 (0:00:00.316) 0:04:01.899 *********
2025-07-12 13:54:33.088522 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.088529 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.088536 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.088542 | orchestrator |
2025-07-12 13:54:33.088549 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-07-12 13:54:33.088556 | orchestrator | Saturday 12 July 2025 13:46:45 +0000 (0:00:01.473) 0:04:03.373 *********
2025-07-12 13:54:33.088562 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:33.088569 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:33.088575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:33.088582 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.088589 | orchestrator |
2025-07-12 13:54:33.088595 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-07-12 13:54:33.088602 | orchestrator | Saturday 12 July 2025 13:46:46 +0000 (0:00:00.625) 0:04:03.998 *********
2025-07-12 13:54:33.088609 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.088615 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.088622 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.088629 | orchestrator |
2025-07-12 13:54:33.088635 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-07-12 13:54:33.088642 | orchestrator | Saturday 12 July 2025 13:46:46 +0000 (0:00:00.395) 0:04:04.393 *********
2025-07-12 13:54:33.088649 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.088655 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.088662 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.088668 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.088675 | orchestrator |
2025-07-12 13:54:33.088682 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-07-12 13:54:33.088688 | orchestrator | Saturday 12 July 2025 13:46:47 +0000 (0:00:01.196) 0:04:05.590 *********
2025-07-12 13:54:33.088695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 13:54:33.088701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 13:54:33.088708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 13:54:33.088715 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.088721 | orchestrator |
2025-07-12 13:54:33.088728 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-07-12 13:54:33.088734 | orchestrator | Saturday 12 July 2025 13:46:48 +0000 (0:00:00.388) 0:04:05.979 *********
2025-07-12 13:54:33.088741 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.088748 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.088754 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.088761 | orchestrator |
2025-07-12 13:54:33.088767 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-07-12 13:54:33.088774 | orchestrator | Saturday 12 July 2025 13:46:48 +0000 (0:00:00.355) 0:04:06.335 *********
2025-07-12 13:54:33.088781 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.088787 | orchestrator |
2025-07-12 13:54:33.088794 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-07-12 13:54:33.088801 | orchestrator | Saturday 12 July 2025 13:46:48 +0000 (0:00:00.222) 0:04:06.558 *********
2025-07-12 13:54:33.088807 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.088814 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.088821 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.088827 | orchestrator |
2025-07-12 13:54:33.088834 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-07-12 13:54:33.088845 | orchestrator | Saturday 12 July 2025 13:46:48 +0000 (0:00:00.280) 0:04:06.838 *********
2025-07-12 13:54:33.088851 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.088858 | orchestrator |
2025-07-12 13:54:33.088865 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-07-12 13:54:33.088871 | orchestrator | Saturday 12 July 2025 13:46:49 +0000 (0:00:00.236) 0:04:07.074 *********
2025-07-12 13:54:33.088881 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.088888 | orchestrator |
2025-07-12 13:54:33.088895 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-07-12 13:54:33.088901 | orchestrator | Saturday 12 July 2025 13:46:49 +0000 (0:00:00.204) 0:04:07.278 *********
2025-07-12 13:54:33.088908 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.088915 | orchestrator |
2025-07-12 13:54:33.088921 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-07-12 13:54:33.088928 | orchestrator | Saturday 12 July 2025 13:46:49 +0000 (0:00:00.393) 0:04:07.672 *********
2025-07-12 13:54:33.088935 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.088942 | orchestrator |
2025-07-12 13:54:33.088948 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-07-12 13:54:33.088955 | orchestrator | Saturday 12 July 2025 13:46:50 +0000 (0:00:00.231) 0:04:07.904 *********
2025-07-12 13:54:33.088962 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.088968 | orchestrator |
2025-07-12 13:54:33.088975 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-07-12 13:54:33.088982 | orchestrator | Saturday 12 July 2025 13:46:50 +0000 (0:00:00.229) 0:04:08.134 *********
2025-07-12 13:54:33.088989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 13:54:33.088995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 13:54:33.089002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 13:54:33.089009 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.089015 | orchestrator |
2025-07-12 13:54:33.089022 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-07-12 13:54:33.089029 | orchestrator | Saturday 12 July 2025 13:46:50 +0000 (0:00:00.418) 0:04:08.552 *********
2025-07-12 13:54:33.089035 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.089042 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.089049 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.089055 | orchestrator |
2025-07-12 13:54:33.089066 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-07-12 13:54:33.089072 | orchestrator | Saturday 12 July 2025 13:46:51 +0000 (0:00:00.334) 0:04:08.886 *********
2025-07-12 13:54:33.089079 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.089086 | orchestrator |
2025-07-12 13:54:33.089093 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-07-12 13:54:33.089099 | orchestrator | Saturday 12 July 2025 13:46:51 +0000 (0:00:00.267) 0:04:09.154 *********
2025-07-12 13:54:33.089106 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.089113 | orchestrator |
2025-07-12 13:54:33.089120 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-07-12 13:54:33.089126 | orchestrator | Saturday 12 July 2025 13:46:51 +0000 (0:00:00.237) 0:04:09.392 *********
2025-07-12 13:54:33.089133 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.089140 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.089146 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.089153 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.089160 | orchestrator |
2025-07-12 13:54:33.089167 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-07-12 13:54:33.089173 | orchestrator | Saturday 12 July 2025 13:46:52 +0000 (0:00:01.163) 0:04:10.555 *********
2025-07-12 13:54:33.089180 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.089193 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.089200 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.089207 | orchestrator |
2025-07-12 13:54:33.089214 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-07-12 13:54:33.089220 | orchestrator | Saturday 12 July 2025 13:46:53 +0000 (0:00:00.331) 0:04:10.887 *********
2025-07-12 13:54:33.089227 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.089234 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.089240 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.089247 | orchestrator |
2025-07-12 13:54:33.089254 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-07-12 13:54:33.089260 | orchestrator | Saturday 12 July 2025 13:46:54 +0000 (0:00:01.314) 0:04:12.202 *********
2025-07-12 13:54:33.089267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 13:54:33.089273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 13:54:33.089280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 13:54:33.089287 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.089293 | orchestrator |
2025-07-12 13:54:33.089300 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-07-12 13:54:33.089307 | orchestrator | Saturday 12 July 2025 13:46:55 +0000 (0:00:01.107) 0:04:13.310 *********
2025-07-12 13:54:33.089313 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.089320 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.089326 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.089333 | orchestrator |
2025-07-12 13:54:33.089340 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-07-12 13:54:33.089346 | orchestrator | Saturday 12 July 2025 13:46:55 +0000 (0:00:00.378) 0:04:13.688 *********
2025-07-12 13:54:33.089353 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.089360 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.089366 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.089373 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.089379 | orchestrator |
2025-07-12 13:54:33.089386 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-07-12 13:54:33.089392 | orchestrator | Saturday 12 July 2025 13:46:56 +0000 (0:00:01.106) 0:04:14.795 *********
2025-07-12 13:54:33.089399 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.089405 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.089412 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.089419 | orchestrator |
2025-07-12 13:54:33.089425 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-07-12 13:54:33.089432 | orchestrator | Saturday 12 July 2025 13:46:57 +0000 (0:00:00.356) 0:04:15.152 *********
2025-07-12 13:54:33.089454 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.089461 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.089468 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.089475 | orchestrator |
2025-07-12 13:54:33.089481 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-07-12 13:54:33.089488 | orchestrator | Saturday 12 July 2025 13:46:58 +0000 (0:00:01.345) 0:04:16.497 *********
2025-07-12 13:54:33.089495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 13:54:33.089501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 13:54:33.089508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 13:54:33.089514 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.089521 | orchestrator |
2025-07-12 13:54:33.089527 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-07-12 13:54:33.089534 | orchestrator | Saturday 12 July 2025 13:46:59 +0000 (0:00:00.820) 0:04:17.317 *********
2025-07-12 13:54:33.089541 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.089547 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.089554 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.089565 | orchestrator |
2025-07-12 13:54:33.089572 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-07-12 13:54:33.089578 | orchestrator | Saturday 12 July 2025 13:46:59 +0000 (0:00:00.819) 0:04:17.692 *********
2025-07-12 13:54:33.089585 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.089591 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.089598 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.089605 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.089611 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.089618 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.089625 | orchestrator |
2025-07-12 13:54:33.089631 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-07-12 13:54:33.089638 | orchestrator | Saturday 12 July 2025 13:47:00 +0000 (0:00:00.819) 0:04:18.512 *********
2025-07-12 13:54:33.089648 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.089655 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.089661 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.089668 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:33.089675 | orchestrator |
2025-07-12 13:54:33.089681 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-07-12 13:54:33.089688 | orchestrator | Saturday 12 July 2025 13:47:01 +0000 (0:00:01.024) 0:04:19.536 *********
2025-07-12 13:54:33.089695 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.089701 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.089708 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.089714 | orchestrator |
2025-07-12 13:54:33.089721 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-07-12 13:54:33.089728 | orchestrator | Saturday 12 July 2025 13:47:02 +0000 (0:00:00.355) 0:04:19.892 *********
2025-07-12 13:54:33.089734 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.089741 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.089748 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.089754 | orchestrator |
2025-07-12 13:54:33.089761 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-07-12 13:54:33.089767 | orchestrator | Saturday 12 July 2025 13:47:03 +0000 (0:00:01.301) 0:04:21.193 *********
2025-07-12 13:54:33.089774 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:33.089781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:33.089788 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:33.089794 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.089801 | orchestrator |
2025-07-12 13:54:33.089807 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-07-12 13:54:33.089814 | orchestrator | Saturday 12 July 2025 13:47:04 +0000 (0:00:00.823) 0:04:22.017 *********
2025-07-12 13:54:33.089821 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.089827 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.089834 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.089840 | orchestrator |
2025-07-12 13:54:33.089847 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-07-12 13:54:33.089854 | orchestrator |
2025-07-12 13:54:33.089860 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 13:54:33.089867 | orchestrator | Saturday 12 July 2025 13:47:05 +0000 (0:00:00.912) 0:04:22.930 *********
2025-07-12 13:54:33.089874 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:33.089880 | orchestrator |
2025-07-12 13:54:33.089887 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 13:54:33.089894 | orchestrator | Saturday 12 July 2025 13:47:05 +0000 (0:00:00.523) 0:04:23.453 *********
2025-07-12 13:54:33.089900 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:33.089911 | orchestrator |
2025-07-12 13:54:33.089918 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 13:54:33.089925 | orchestrator | Saturday 12 July 2025 13:47:06 +0000 (0:00:00.723) 0:04:24.176 *********
2025-07-12 13:54:33.089931 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.089938 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.089945 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.089951 | orchestrator |
2025-07-12 13:54:33.089958 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 13:54:33.089965 | orchestrator | Saturday 12 July 2025 13:47:07 +0000 (0:00:00.720) 0:04:24.896 *********
2025-07-12 13:54:33.089971 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.089978 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.089985 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.089991 | orchestrator |
2025-07-12 13:54:33.089998 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 13:54:33.090008 | orchestrator | Saturday 12 July 2025 13:47:07 +0000 (0:00:00.310) 0:04:25.207 *********
2025-07-12 13:54:33.090072 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.090082 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.090089 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.090095 | orchestrator |
2025-07-12 13:54:33.090102 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 13:54:33.090109 | orchestrator | Saturday 12 July 2025 13:47:07 +0000 (0:00:00.342) 0:04:25.550 *********
2025-07-12 13:54:33.090115 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.090122 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.090129 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.090135 | orchestrator |
2025-07-12 13:54:33.090142 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 13:54:33.090149 | orchestrator | Saturday 12 July 2025 13:47:08 +0000 (0:00:00.558) 0:04:26.108 *********
2025-07-12 13:54:33.090155 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.090162 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.090169 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.090175 | orchestrator |
2025-07-12 13:54:33.090182 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 13:54:33.090189 | orchestrator | Saturday 12 July 2025 13:47:08 +0000 (0:00:00.707) 0:04:26.815 *********
2025-07-12 13:54:33.090195 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.090202 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.090209 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.090215 | orchestrator |
2025-07-12 13:54:33.090222 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 13:54:33.090229 | orchestrator | Saturday 12 July 2025 13:47:09 +0000 (0:00:00.350) 0:04:27.165 *********
2025-07-12 13:54:33.090235 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.090242 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.090249 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.090255 | orchestrator |
2025-07-12 13:54:33.090262 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 13:54:33.090290 | orchestrator | Saturday 12 July 2025 13:47:09 +0000 (0:00:00.337) 0:04:27.502 *********
2025-07-12 13:54:33.090298 | orchestrator | ok:
[testbed-node-0] 2025-07-12 13:54:33.090305 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.090312 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.090318 | orchestrator | 2025-07-12 13:54:33.090325 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 13:54:33.090332 | orchestrator | Saturday 12 July 2025 13:47:10 +0000 (0:00:01.044) 0:04:28.547 ********* 2025-07-12 13:54:33.090338 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.090345 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.090352 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.090359 | orchestrator | 2025-07-12 13:54:33.090365 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 13:54:33.090378 | orchestrator | Saturday 12 July 2025 13:47:11 +0000 (0:00:00.776) 0:04:29.323 ********* 2025-07-12 13:54:33.090384 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.090391 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.090398 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.090404 | orchestrator | 2025-07-12 13:54:33.090411 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 13:54:33.090417 | orchestrator | Saturday 12 July 2025 13:47:11 +0000 (0:00:00.312) 0:04:29.636 ********* 2025-07-12 13:54:33.090424 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.090431 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.090437 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.090478 | orchestrator | 2025-07-12 13:54:33.090485 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 13:54:33.090492 | orchestrator | Saturday 12 July 2025 13:47:12 +0000 (0:00:00.322) 0:04:29.959 ********* 2025-07-12 13:54:33.090498 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.090505 | 
orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.090511 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.090518 | orchestrator | 2025-07-12 13:54:33.090525 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 13:54:33.090531 | orchestrator | Saturday 12 July 2025 13:47:12 +0000 (0:00:00.565) 0:04:30.524 ********* 2025-07-12 13:54:33.090538 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.090545 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.090551 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.090558 | orchestrator | 2025-07-12 13:54:33.090564 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 13:54:33.090571 | orchestrator | Saturday 12 July 2025 13:47:12 +0000 (0:00:00.334) 0:04:30.859 ********* 2025-07-12 13:54:33.090577 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.090584 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.090591 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.090597 | orchestrator | 2025-07-12 13:54:33.090604 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 13:54:33.090611 | orchestrator | Saturday 12 July 2025 13:47:13 +0000 (0:00:00.303) 0:04:31.162 ********* 2025-07-12 13:54:33.090617 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.090624 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.090631 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.090637 | orchestrator | 2025-07-12 13:54:33.090643 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 13:54:33.090649 | orchestrator | Saturday 12 July 2025 13:47:13 +0000 (0:00:00.293) 0:04:31.455 ********* 2025-07-12 13:54:33.090655 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.090661 | 
orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.090668 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.090674 | orchestrator | 2025-07-12 13:54:33.090680 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 13:54:33.090686 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.544) 0:04:32.000 ********* 2025-07-12 13:54:33.090692 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.090698 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.090705 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.090711 | orchestrator | 2025-07-12 13:54:33.090717 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 13:54:33.090723 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.430) 0:04:32.430 ********* 2025-07-12 13:54:33.090733 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.090740 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.090746 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.090752 | orchestrator | 2025-07-12 13:54:33.090758 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 13:54:33.090764 | orchestrator | Saturday 12 July 2025 13:47:14 +0000 (0:00:00.397) 0:04:32.828 ********* 2025-07-12 13:54:33.090775 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.090782 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.090788 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.090794 | orchestrator | 2025-07-12 13:54:33.090800 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-07-12 13:54:33.090806 | orchestrator | Saturday 12 July 2025 13:47:15 +0000 (0:00:00.864) 0:04:33.692 ********* 2025-07-12 13:54:33.090812 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.090818 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.090824 
| orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.090830 | orchestrator | 2025-07-12 13:54:33.090837 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-07-12 13:54:33.090843 | orchestrator | Saturday 12 July 2025 13:47:16 +0000 (0:00:00.326) 0:04:34.018 ********* 2025-07-12 13:54:33.090849 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:33.090855 | orchestrator | 2025-07-12 13:54:33.090861 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-07-12 13:54:33.090868 | orchestrator | Saturday 12 July 2025 13:47:16 +0000 (0:00:00.527) 0:04:34.546 ********* 2025-07-12 13:54:33.090874 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.090880 | orchestrator | 2025-07-12 13:54:33.090886 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-07-12 13:54:33.090892 | orchestrator | Saturday 12 July 2025 13:47:16 +0000 (0:00:00.147) 0:04:34.694 ********* 2025-07-12 13:54:33.090898 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-07-12 13:54:33.090904 | orchestrator | 2025-07-12 13:54:33.090931 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-07-12 13:54:33.090939 | orchestrator | Saturday 12 July 2025 13:47:18 +0000 (0:00:01.465) 0:04:36.160 ********* 2025-07-12 13:54:33.090945 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.090952 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.090958 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.090964 | orchestrator | 2025-07-12 13:54:33.090970 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-07-12 13:54:33.090976 | orchestrator | Saturday 12 July 2025 13:47:18 +0000 (0:00:00.339) 0:04:36.499 ********* 2025-07-12 13:54:33.090982 
| orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.090989 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.090995 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.091001 | orchestrator | 2025-07-12 13:54:33.091007 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-07-12 13:54:33.091013 | orchestrator | Saturday 12 July 2025 13:47:18 +0000 (0:00:00.322) 0:04:36.822 ********* 2025-07-12 13:54:33.091019 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.091026 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.091032 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.091038 | orchestrator | 2025-07-12 13:54:33.091044 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-07-12 13:54:33.091050 | orchestrator | Saturday 12 July 2025 13:47:20 +0000 (0:00:01.204) 0:04:38.027 ********* 2025-07-12 13:54:33.091056 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.091062 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.091069 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.091075 | orchestrator | 2025-07-12 13:54:33.091081 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-07-12 13:54:33.091087 | orchestrator | Saturday 12 July 2025 13:47:21 +0000 (0:00:01.078) 0:04:39.105 ********* 2025-07-12 13:54:33.091093 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.091099 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.091106 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.091112 | orchestrator | 2025-07-12 13:54:33.091118 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-07-12 13:54:33.091129 | orchestrator | Saturday 12 July 2025 13:47:22 +0000 (0:00:00.798) 0:04:39.903 ********* 2025-07-12 13:54:33.091135 | orchestrator | ok: 
[testbed-node-0] 2025-07-12 13:54:33.091142 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.091148 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.091154 | orchestrator | 2025-07-12 13:54:33.091160 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-07-12 13:54:33.091166 | orchestrator | Saturday 12 July 2025 13:47:22 +0000 (0:00:00.714) 0:04:40.618 ********* 2025-07-12 13:54:33.091172 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.091178 | orchestrator | 2025-07-12 13:54:33.091184 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-07-12 13:54:33.091191 | orchestrator | Saturday 12 July 2025 13:47:24 +0000 (0:00:01.340) 0:04:41.958 ********* 2025-07-12 13:54:33.091197 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.091203 | orchestrator | 2025-07-12 13:54:33.091209 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-07-12 13:54:33.091215 | orchestrator | Saturday 12 July 2025 13:47:24 +0000 (0:00:00.716) 0:04:42.674 ********* 2025-07-12 13:54:33.091221 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 13:54:33.091228 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:33.091234 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:33.091240 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 13:54:33.091246 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-07-12 13:54:33.091252 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 13:54:33.091258 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 13:54:33.091264 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-07-12 13:54:33.091270 | 
orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 13:54:33.091280 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-07-12 13:54:33.091286 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-07-12 13:54:33.091292 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-07-12 13:54:33.091298 | orchestrator | 2025-07-12 13:54:33.091304 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-07-12 13:54:33.091310 | orchestrator | Saturday 12 July 2025 13:47:28 +0000 (0:00:03.661) 0:04:46.335 ********* 2025-07-12 13:54:33.091317 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.091323 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.091329 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.091335 | orchestrator | 2025-07-12 13:54:33.091341 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-07-12 13:54:33.091347 | orchestrator | Saturday 12 July 2025 13:47:29 +0000 (0:00:01.413) 0:04:47.749 ********* 2025-07-12 13:54:33.091353 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.091359 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.091365 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.091372 | orchestrator | 2025-07-12 13:54:33.091378 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-07-12 13:54:33.091384 | orchestrator | Saturday 12 July 2025 13:47:30 +0000 (0:00:00.327) 0:04:48.077 ********* 2025-07-12 13:54:33.091390 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.091396 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.091402 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.091408 | orchestrator | 2025-07-12 13:54:33.091415 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-07-12 13:54:33.091421 | orchestrator | 
Saturday 12 July 2025 13:47:30 +0000 (0:00:00.317) 0:04:48.395 ********* 2025-07-12 13:54:33.091427 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.091433 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.091451 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.091458 | orchestrator | 2025-07-12 13:54:33.091469 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-07-12 13:54:33.091493 | orchestrator | Saturday 12 July 2025 13:47:32 +0000 (0:00:01.791) 0:04:50.186 ********* 2025-07-12 13:54:33.091500 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.091506 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.091512 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.091519 | orchestrator | 2025-07-12 13:54:33.091525 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-07-12 13:54:33.091531 | orchestrator | Saturday 12 July 2025 13:47:34 +0000 (0:00:01.694) 0:04:51.881 ********* 2025-07-12 13:54:33.091537 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.091543 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.091549 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.091555 | orchestrator | 2025-07-12 13:54:33.091562 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-07-12 13:54:33.091568 | orchestrator | Saturday 12 July 2025 13:47:34 +0000 (0:00:00.318) 0:04:52.199 ********* 2025-07-12 13:54:33.091574 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:33.091580 | orchestrator | 2025-07-12 13:54:33.091586 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-07-12 13:54:33.091592 | orchestrator | Saturday 12 July 2025 13:47:34 +0000 (0:00:00.533) 0:04:52.733 
********* 2025-07-12 13:54:33.091598 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.091605 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.091611 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.091617 | orchestrator | 2025-07-12 13:54:33.091623 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-07-12 13:54:33.091629 | orchestrator | Saturday 12 July 2025 13:47:35 +0000 (0:00:00.562) 0:04:53.296 ********* 2025-07-12 13:54:33.091636 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.091642 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.091648 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.091654 | orchestrator | 2025-07-12 13:54:33.091660 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-07-12 13:54:33.091666 | orchestrator | Saturday 12 July 2025 13:47:35 +0000 (0:00:00.330) 0:04:53.626 ********* 2025-07-12 13:54:33.091672 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:33.091679 | orchestrator | 2025-07-12 13:54:33.091685 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-07-12 13:54:33.091691 | orchestrator | Saturday 12 July 2025 13:47:36 +0000 (0:00:00.488) 0:04:54.115 ********* 2025-07-12 13:54:33.091697 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.091703 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.091709 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.091716 | orchestrator | 2025-07-12 13:54:33.091722 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-07-12 13:54:33.091728 | orchestrator | Saturday 12 July 2025 13:47:38 +0000 (0:00:02.053) 0:04:56.169 ********* 2025-07-12 13:54:33.091734 | orchestrator | changed: 
[testbed-node-0] 2025-07-12 13:54:33.091740 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.091747 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.091753 | orchestrator | 2025-07-12 13:54:33.091759 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-07-12 13:54:33.091765 | orchestrator | Saturday 12 July 2025 13:47:39 +0000 (0:00:01.237) 0:04:57.407 ********* 2025-07-12 13:54:33.091771 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.091777 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.091783 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.091789 | orchestrator | 2025-07-12 13:54:33.091796 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-07-12 13:54:33.091802 | orchestrator | Saturday 12 July 2025 13:47:41 +0000 (0:00:01.761) 0:04:59.168 ********* 2025-07-12 13:54:33.091812 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:54:33.091819 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:54:33.091825 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:54:33.091831 | orchestrator | 2025-07-12 13:54:33.091837 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-07-12 13:54:33.091847 | orchestrator | Saturday 12 July 2025 13:47:43 +0000 (0:00:02.073) 0:05:01.241 ********* 2025-07-12 13:54:33.091853 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:33.091860 | orchestrator | 2025-07-12 13:54:33.091866 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-07-12 13:54:33.091872 | orchestrator | Saturday 12 July 2025 13:47:44 +0000 (0:00:00.820) 0:05:02.062 ********* 2025-07-12 13:54:33.091878 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... 
(10 retries left). 2025-07-12 13:54:33.091884 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.091890 | orchestrator | 2025-07-12 13:54:33.091896 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-07-12 13:54:33.091902 | orchestrator | Saturday 12 July 2025 13:48:06 +0000 (0:00:21.928) 0:05:23.991 ********* 2025-07-12 13:54:33.091909 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.091915 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.091921 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.091927 | orchestrator | 2025-07-12 13:54:33.091933 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-07-12 13:54:33.091939 | orchestrator | Saturday 12 July 2025 13:48:17 +0000 (0:00:10.876) 0:05:34.867 ********* 2025-07-12 13:54:33.091945 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.091951 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.091958 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.091964 | orchestrator | 2025-07-12 13:54:33.091970 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-07-12 13:54:33.091976 | orchestrator | Saturday 12 July 2025 13:48:17 +0000 (0:00:00.304) 0:05:35.171 ********* 2025-07-12 13:54:33.092001 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e0ce4e470ac053067d692c4cb71634c6557f1d02'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-07-12 13:54:33.092010 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 
'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e0ce4e470ac053067d692c4cb71634c6557f1d02'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-07-12 13:54:33.092017 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e0ce4e470ac053067d692c4cb71634c6557f1d02'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-07-12 13:54:33.092024 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e0ce4e470ac053067d692c4cb71634c6557f1d02'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-07-12 13:54:33.092031 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e0ce4e470ac053067d692c4cb71634c6557f1d02'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-07-12 13:54:33.092042 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e0ce4e470ac053067d692c4cb71634c6557f1d02'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__e0ce4e470ac053067d692c4cb71634c6557f1d02'}])  2025-07-12 13:54:33.092050 | orchestrator | 2025-07-12 13:54:33.092056 | orchestrator | 
RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 13:54:33.092062 | orchestrator | Saturday 12 July 2025 13:48:32 +0000 (0:00:14.965) 0:05:50.137 ********* 2025-07-12 13:54:33.092068 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.092075 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.092081 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.092087 | orchestrator | 2025-07-12 13:54:33.092093 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-07-12 13:54:33.092099 | orchestrator | Saturday 12 July 2025 13:48:32 +0000 (0:00:00.295) 0:05:50.433 ********* 2025-07-12 13:54:33.092105 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:54:33.092112 | orchestrator | 2025-07-12 13:54:33.092121 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-07-12 13:54:33.092127 | orchestrator | Saturday 12 July 2025 13:48:33 +0000 (0:00:00.723) 0:05:51.157 ********* 2025-07-12 13:54:33.092133 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.092139 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.092146 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.092152 | orchestrator | 2025-07-12 13:54:33.092158 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-07-12 13:54:33.092164 | orchestrator | Saturday 12 July 2025 13:48:33 +0000 (0:00:00.313) 0:05:51.471 ********* 2025-07-12 13:54:33.092170 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.092176 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.092182 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.092189 | orchestrator | 2025-07-12 13:54:33.092195 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] 
********************
2025-07-12 13:54:33.092201 | orchestrator | Saturday 12 July 2025 13:48:33 +0000 (0:00:00.331) 0:05:51.802 *********
2025-07-12 13:54:33.092207 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:33.092213 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:33.092219 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:33.092225 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092231 | orchestrator |
2025-07-12 13:54:33.092237 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-07-12 13:54:33.092244 | orchestrator | Saturday 12 July 2025 13:48:34 +0000 (0:00:00.885) 0:05:52.687 *********
2025-07-12 13:54:33.092250 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.092256 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.092262 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.092268 | orchestrator |
2025-07-12 13:54:33.092274 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-07-12 13:54:33.092280 | orchestrator |
2025-07-12 13:54:33.092287 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 13:54:33.092310 | orchestrator | Saturday 12 July 2025 13:48:35 +0000 (0:00:00.788) 0:05:53.476 *********
2025-07-12 13:54:33.092318 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:33.092324 | orchestrator |
2025-07-12 13:54:33.092331 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 13:54:33.092343 | orchestrator | Saturday 12 July 2025 13:48:36 +0000 (0:00:00.560) 0:05:54.037 *********
2025-07-12 13:54:33.092349 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:33.092355 | orchestrator |
2025-07-12 13:54:33.092362 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 13:54:33.092368 | orchestrator | Saturday 12 July 2025 13:48:36 +0000 (0:00:00.724) 0:05:54.761 *********
2025-07-12 13:54:33.092374 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.092380 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.092386 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.092392 | orchestrator |
2025-07-12 13:54:33.092398 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 13:54:33.092404 | orchestrator | Saturday 12 July 2025 13:48:37 +0000 (0:00:00.705) 0:05:55.466 *********
2025-07-12 13:54:33.092411 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092417 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092423 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092429 | orchestrator |
2025-07-12 13:54:33.092435 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 13:54:33.092458 | orchestrator | Saturday 12 July 2025 13:48:37 +0000 (0:00:00.328) 0:05:55.795 *********
2025-07-12 13:54:33.092464 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092470 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092476 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092483 | orchestrator |
2025-07-12 13:54:33.092489 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 13:54:33.092495 | orchestrator | Saturday 12 July 2025 13:48:38 +0000 (0:00:00.536) 0:05:56.331 *********
2025-07-12 13:54:33.092501 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092507 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092514 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092520 | orchestrator |
2025-07-12 13:54:33.092526 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 13:54:33.092532 | orchestrator | Saturday 12 July 2025 13:48:38 +0000 (0:00:00.301) 0:05:56.632 *********
2025-07-12 13:54:33.092539 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.092545 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.092551 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.092557 | orchestrator |
2025-07-12 13:54:33.092563 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 13:54:33.092570 | orchestrator | Saturday 12 July 2025 13:48:39 +0000 (0:00:00.706) 0:05:57.339 *********
2025-07-12 13:54:33.092576 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092582 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092588 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092594 | orchestrator |
2025-07-12 13:54:33.092601 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 13:54:33.092607 | orchestrator | Saturday 12 July 2025 13:48:39 +0000 (0:00:00.321) 0:05:57.660 *********
2025-07-12 13:54:33.092613 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092619 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092625 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092631 | orchestrator |
2025-07-12 13:54:33.092638 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 13:54:33.092644 | orchestrator | Saturday 12 July 2025 13:48:40 +0000 (0:00:00.597) 0:05:58.257 *********
2025-07-12 13:54:33.092650 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.092656 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.092662 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.092668 | orchestrator |
2025-07-12 13:54:33.092675 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 13:54:33.092684 | orchestrator | Saturday 12 July 2025 13:48:41 +0000 (0:00:00.755) 0:05:59.013 *********
2025-07-12 13:54:33.092694 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.092701 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.092707 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.092713 | orchestrator |
2025-07-12 13:54:33.092719 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 13:54:33.092725 | orchestrator | Saturday 12 July 2025 13:48:41 +0000 (0:00:00.707) 0:05:59.721 *********
2025-07-12 13:54:33.092732 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092738 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092744 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092750 | orchestrator |
2025-07-12 13:54:33.092756 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 13:54:33.092763 | orchestrator | Saturday 12 July 2025 13:48:42 +0000 (0:00:00.291) 0:06:00.012 *********
2025-07-12 13:54:33.092769 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.092775 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.092781 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.092787 | orchestrator |
2025-07-12 13:54:33.092793 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 13:54:33.092800 | orchestrator | Saturday 12 July 2025 13:48:42 +0000 (0:00:00.621) 0:06:00.633 *********
2025-07-12 13:54:33.092806 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092812 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092818 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092824 | orchestrator |
2025-07-12 13:54:33.092830 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 13:54:33.092837 | orchestrator | Saturday 12 July 2025 13:48:43 +0000 (0:00:00.310) 0:06:00.943 *********
2025-07-12 13:54:33.092843 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092849 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092855 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092861 | orchestrator |
2025-07-12 13:54:33.092868 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 13:54:33.092892 | orchestrator | Saturday 12 July 2025 13:48:43 +0000 (0:00:00.298) 0:06:01.242 *********
2025-07-12 13:54:33.092899 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092906 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092912 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092918 | orchestrator |
2025-07-12 13:54:33.092924 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 13:54:33.092930 | orchestrator | Saturday 12 July 2025 13:48:43 +0000 (0:00:00.281) 0:06:01.523 *********
2025-07-12 13:54:33.092936 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092942 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092949 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092955 | orchestrator |
2025-07-12 13:54:33.092961 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 13:54:33.092967 | orchestrator | Saturday 12 July 2025 13:48:44 +0000 (0:00:00.558) 0:06:02.082 *********
2025-07-12 13:54:33.092973 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.092979 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.092985 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.092991 | orchestrator |
2025-07-12 13:54:33.092997 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 13:54:33.093004 | orchestrator | Saturday 12 July 2025 13:48:44 +0000 (0:00:00.317) 0:06:02.399 *********
2025-07-12 13:54:33.093010 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.093016 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.093022 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.093028 | orchestrator |
2025-07-12 13:54:33.093034 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 13:54:33.093041 | orchestrator | Saturday 12 July 2025 13:48:44 +0000 (0:00:00.321) 0:06:02.721 *********
2025-07-12 13:54:33.093047 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.093053 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.093066 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.093072 | orchestrator |
2025-07-12 13:54:33.093078 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 13:54:33.093085 | orchestrator | Saturday 12 July 2025 13:48:45 +0000 (0:00:00.308) 0:06:03.030 *********
2025-07-12 13:54:33.093091 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.093097 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.093103 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.093109 | orchestrator |
2025-07-12 13:54:33.093115 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-07-12 13:54:33.093121 | orchestrator | Saturday 12 July 2025 13:48:45 +0000 (0:00:00.775) 0:06:03.805 *********
2025-07-12 13:54:33.093127 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:33.093134 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:54:33.093140 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:54:33.093146 | orchestrator |
2025-07-12 13:54:33.093152 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-07-12 13:54:33.093158 | orchestrator | Saturday 12 July 2025 13:48:46 +0000 (0:00:00.625) 0:06:04.430 *********
2025-07-12 13:54:33.093164 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:33.093171 | orchestrator |
2025-07-12 13:54:33.093177 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-07-12 13:54:33.093183 | orchestrator | Saturday 12 July 2025 13:48:47 +0000 (0:00:00.548) 0:06:04.979 *********
2025-07-12 13:54:33.093189 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.093195 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.093201 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.093208 | orchestrator |
2025-07-12 13:54:33.093214 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-07-12 13:54:33.093220 | orchestrator | Saturday 12 July 2025 13:48:48 +0000 (0:00:00.941) 0:06:05.921 *********
2025-07-12 13:54:33.093226 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.093232 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.093238 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.093245 | orchestrator |
2025-07-12 13:54:33.093254 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-07-12 13:54:33.093260 | orchestrator | Saturday 12 July 2025 13:48:48 +0000 (0:00:00.320) 0:06:06.242 *********
2025-07-12 13:54:33.093266 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 13:54:33.093272 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 13:54:33.093278 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 13:54:33.093285 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-07-12 13:54:33.093291 | orchestrator |
2025-07-12 13:54:33.093297 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-07-12 13:54:33.093303 | orchestrator | Saturday 12 July 2025 13:48:58 +0000 (0:00:10.262) 0:06:16.505 *********
2025-07-12 13:54:33.093309 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.093315 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.093321 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.093327 | orchestrator |
2025-07-12 13:54:33.093333 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-07-12 13:54:33.093340 | orchestrator | Saturday 12 July 2025 13:48:58 +0000 (0:00:00.331) 0:06:16.836 *********
2025-07-12 13:54:33.093346 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-12 13:54:33.093352 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-12 13:54:33.093358 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-12 13:54:33.093364 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-07-12 13:54:33.093370 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:54:33.093380 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:54:33.093386 | orchestrator |
2025-07-12 13:54:33.093392 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-07-12 13:54:33.093399 | orchestrator | Saturday 12 July 2025 13:49:01 +0000 (0:00:02.591) 0:06:19.428 *********
2025-07-12 13:54:33.093422 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-12 13:54:33.093430 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-12 13:54:33.093436 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-12 13:54:33.093456 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 13:54:33.093463 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-12 13:54:33.093469 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-12 13:54:33.093475 | orchestrator |
2025-07-12 13:54:33.093481 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-07-12 13:54:33.093488 | orchestrator | Saturday 12 July 2025 13:49:03 +0000 (0:00:01.705) 0:06:21.134 *********
2025-07-12 13:54:33.093494 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.093500 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.093507 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.093513 | orchestrator |
2025-07-12 13:54:33.093519 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-07-12 13:54:33.093525 | orchestrator | Saturday 12 July 2025 13:49:04 +0000 (0:00:00.735) 0:06:21.869 *********
2025-07-12 13:54:33.093532 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.093538 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.093544 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.093550 | orchestrator |
2025-07-12 13:54:33.093556 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-07-12 13:54:33.093563 | orchestrator | Saturday 12 July 2025 13:49:04 +0000 (0:00:00.284) 0:06:22.153 *********
2025-07-12 13:54:33.093569 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.093575 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.093581 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.093587 | orchestrator |
2025-07-12 13:54:33.093593 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-07-12 13:54:33.093600 | orchestrator | Saturday 12 July 2025 13:49:04 +0000 (0:00:00.296) 0:06:22.450 *********
2025-07-12 13:54:33.093606 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:33.093612 | orchestrator |
2025-07-12 13:54:33.093618 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-07-12 13:54:33.093624 | orchestrator | Saturday 12 July 2025 13:49:05 +0000 (0:00:00.791) 0:06:23.241 *********
2025-07-12 13:54:33.093631 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.093637 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.093643 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.093649 | orchestrator |
2025-07-12 13:54:33.093655 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-07-12 13:54:33.093662 | orchestrator | Saturday 12 July 2025 13:49:05 +0000 (0:00:00.361) 0:06:23.603 *********
2025-07-12 13:54:33.093668 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.093674 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.093680 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.093686 | orchestrator |
2025-07-12 13:54:33.093692 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-07-12 13:54:33.093699 | orchestrator | Saturday 12 July 2025 13:49:06 +0000 (0:00:00.333) 0:06:23.936 *********
2025-07-12 13:54:33.093705 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:33.093711 | orchestrator |
2025-07-12 13:54:33.093717 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-07-12 13:54:33.093723 | orchestrator | Saturday 12 July 2025 13:49:06 +0000 (0:00:00.781) 0:06:24.718 *********
2025-07-12 13:54:33.093734 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.093740 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.093747 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.093753 | orchestrator |
2025-07-12 13:54:33.093759 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-07-12 13:54:33.093765 | orchestrator | Saturday 12 July 2025 13:49:08 +0000 (0:00:01.350) 0:06:26.068 *********
2025-07-12 13:54:33.093771 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.093778 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.093787 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.093793 | orchestrator |
2025-07-12 13:54:33.093799 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-07-12 13:54:33.093806 | orchestrator | Saturday 12 July 2025 13:49:09 +0000 (0:00:01.042) 0:06:27.111 *********
2025-07-12 13:54:33.093812 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.093818 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.093824 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.093830 | orchestrator |
2025-07-12 13:54:33.093836 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-07-12 13:54:33.093842 | orchestrator | Saturday 12 July 2025 13:49:11 +0000 (0:00:01.961) 0:06:29.073 *********
2025-07-12 13:54:33.093849 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.093855 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.093861 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.093867 | orchestrator |
2025-07-12 13:54:33.093873 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-07-12 13:54:33.093879 | orchestrator | Saturday 12 July 2025 13:49:13 +0000 (0:00:01.868) 0:06:30.941 *********
2025-07-12 13:54:33.093885 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.093891 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.093898 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-07-12 13:54:33.093904 | orchestrator |
2025-07-12 13:54:33.093910 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-07-12 13:54:33.093916 | orchestrator | Saturday 12 July 2025 13:49:13 +0000 (0:00:00.379) 0:06:31.320 *********
2025-07-12 13:54:33.093922 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-07-12 13:54:33.093929 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-07-12 13:54:33.093953 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-07-12 13:54:33.093960 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-07-12 13:54:33.093966 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2025-07-12 13:54:33.093972 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-07-12 13:54:33.093979 | orchestrator |
2025-07-12 13:54:33.093985 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-07-12 13:54:33.093991 | orchestrator | Saturday 12 July 2025 13:49:43 +0000 (0:00:30.201) 0:07:01.522 *********
2025-07-12 13:54:33.093997 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-07-12 13:54:33.094003 | orchestrator |
2025-07-12 13:54:33.094009 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-07-12 13:54:33.094040 | orchestrator | Saturday 12 July 2025 13:49:45 +0000 (0:00:01.475) 0:07:02.998 *********
2025-07-12 13:54:33.094047 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.094053 | orchestrator |
2025-07-12 13:54:33.094059 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-07-12 13:54:33.094065 | orchestrator | Saturday 12 July 2025 13:49:45 +0000 (0:00:00.836) 0:07:03.834 *********
2025-07-12 13:54:33.094072 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.094083 | orchestrator |
2025-07-12 13:54:33.094089 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-07-12 13:54:33.094095 | orchestrator | Saturday 12 July 2025 13:49:46 +0000 (0:00:00.194) 0:07:04.028 *********
2025-07-12 13:54:33.094102 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-07-12 13:54:33.094108 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-07-12 13:54:33.094114 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-07-12 13:54:33.094120 | orchestrator |
2025-07-12 13:54:33.094127 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-07-12 13:54:33.094133 | orchestrator | Saturday 12 July 2025 13:49:52 +0000 (0:00:06.338) 0:07:10.367 *********
2025-07-12 13:54:33.094139 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-07-12 13:54:33.094145 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-07-12 13:54:33.094152 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-07-12 13:54:33.094158 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-07-12 13:54:33.094164 | orchestrator |
2025-07-12 13:54:33.094170 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 13:54:33.094176 | orchestrator | Saturday 12 July 2025 13:49:57 +0000 (0:00:04.733) 0:07:15.101 *********
2025-07-12 13:54:33.094183 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.094189 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.094195 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.094201 | orchestrator |
2025-07-12 13:54:33.094207 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-07-12 13:54:33.094214 | orchestrator | Saturday 12 July 2025 13:49:58 +0000 (0:00:01.005) 0:07:16.107 *********
2025-07-12 13:54:33.094220 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 13:54:33.094226 | orchestrator |
2025-07-12 13:54:33.094232 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-07-12 13:54:33.094239 | orchestrator | Saturday 12 July 2025 13:49:58 +0000 (0:00:00.542) 0:07:16.649 *********
2025-07-12 13:54:33.094245 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.094251 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.094257 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.094264 | orchestrator |
2025-07-12 13:54:33.094270 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-07-12 13:54:33.094280 | orchestrator | Saturday 12 July 2025 13:49:59 +0000 (0:00:00.302) 0:07:16.951 *********
2025-07-12 13:54:33.094286 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.094292 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.094298 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.094305 | orchestrator |
2025-07-12 13:54:33.094311 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-07-12 13:54:33.094317 | orchestrator | Saturday 12 July 2025 13:50:00 +0000 (0:00:01.372) 0:07:18.324 *********
2025-07-12 13:54:33.094323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 13:54:33.094329 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 13:54:33.094336 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 13:54:33.094342 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.094348 | orchestrator |
2025-07-12 13:54:33.094354 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-07-12 13:54:33.094361 | orchestrator | Saturday 12 July 2025 13:50:01 +0000 (0:00:00.660) 0:07:18.984 *********
2025-07-12 13:54:33.094367 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.094373 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.094379 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.094386 | orchestrator |
2025-07-12 13:54:33.094392 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-07-12 13:54:33.094402 | orchestrator |
2025-07-12 13:54:33.094408 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 13:54:33.094414 | orchestrator | Saturday 12 July 2025 13:50:01 +0000 (0:00:00.604) 0:07:19.589 *********
2025-07-12 13:54:33.094421 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.094427 | orchestrator |
2025-07-12 13:54:33.094433 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 13:54:33.094491 | orchestrator | Saturday 12 July 2025 13:50:02 +0000 (0:00:00.762) 0:07:20.351 *********
2025-07-12 13:54:33.094500 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.094507 | orchestrator |
2025-07-12 13:54:33.094513 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 13:54:33.094519 | orchestrator | Saturday 12 July 2025 13:50:03 +0000 (0:00:00.522) 0:07:20.874 *********
2025-07-12 13:54:33.094525 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.094532 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.094538 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.094544 | orchestrator |
2025-07-12 13:54:33.094550 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 13:54:33.094556 | orchestrator | Saturday 12 July 2025 13:50:03 +0000 (0:00:00.341) 0:07:21.215 *********
2025-07-12 13:54:33.094563 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.094569 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.094575 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.094581 | orchestrator |
2025-07-12 13:54:33.094587 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 13:54:33.094593 | orchestrator | Saturday 12 July 2025 13:50:04 +0000 (0:00:00.990) 0:07:22.206 *********
2025-07-12 13:54:33.094599 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.094606 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.094612 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.094618 | orchestrator |
2025-07-12 13:54:33.094624 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 13:54:33.094630 | orchestrator | Saturday 12 July 2025 13:50:05 +0000 (0:00:00.676) 0:07:22.883 *********
2025-07-12 13:54:33.094637 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.094643 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.094649 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.094655 | orchestrator |
2025-07-12 13:54:33.094661 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 13:54:33.094667 | orchestrator | Saturday 12 July 2025 13:50:05 +0000 (0:00:00.672) 0:07:23.555 *********
2025-07-12 13:54:33.094674 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.094680 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.094686 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.094692 | orchestrator |
2025-07-12 13:54:33.094698 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 13:54:33.094705 | orchestrator | Saturday 12 July 2025 13:50:05 +0000 (0:00:00.305) 0:07:23.861 *********
2025-07-12 13:54:33.094711 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.094717 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.094723 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.094729 | orchestrator |
2025-07-12 13:54:33.094735 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 13:54:33.094742 | orchestrator | Saturday 12 July 2025 13:50:06 +0000 (0:00:00.587) 0:07:24.448 *********
2025-07-12 13:54:33.094748 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.094753 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.094758 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.094764 | orchestrator |
2025-07-12 13:54:33.094769 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 13:54:33.094779 | orchestrator | Saturday 12 July 2025 13:50:06 +0000 (0:00:00.305) 0:07:24.754 *********
2025-07-12 13:54:33.094784 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.094790 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.094795 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.094801 | orchestrator |
2025-07-12 13:54:33.094806 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 13:54:33.094812 | orchestrator | Saturday 12 July 2025 13:50:07 +0000 (0:00:00.659) 0:07:25.414 *********
2025-07-12 13:54:33.094817 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.094822 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.094828 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.094833 | orchestrator |
2025-07-12 13:54:33.094839 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 13:54:33.094844 | orchestrator | Saturday 12 July 2025 13:50:08 +0000 (0:00:00.747) 0:07:26.161 *********
2025-07-12 13:54:33.094853 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.094858 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.094864 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.094869 | orchestrator |
2025-07-12 13:54:33.094874 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 13:54:33.094880 | orchestrator | Saturday 12 July 2025 13:50:08 +0000 (0:00:00.553) 0:07:26.714 *********
2025-07-12 13:54:33.094885 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.094891 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.094896 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.094901 | orchestrator |
2025-07-12 13:54:33.094907 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 13:54:33.094912 | orchestrator | Saturday 12 July 2025 13:50:09 +0000 (0:00:00.299) 0:07:27.013 *********
2025-07-12 13:54:33.094918 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.094923 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.094928 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.094934 | orchestrator |
2025-07-12 13:54:33.094939 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 13:54:33.094945 | orchestrator | Saturday 12 July 2025 13:50:09 +0000 (0:00:00.323) 0:07:27.337 *********
2025-07-12 13:54:33.094950 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.094955 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.094961 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.094966 | orchestrator |
2025-07-12 13:54:33.094972 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 13:54:33.094977 | orchestrator | Saturday 12 July 2025 13:50:09 +0000 (0:00:00.350) 0:07:27.687 *********
2025-07-12 13:54:33.094983 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.094988 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.094993 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.094999 | orchestrator |
2025-07-12 13:54:33.095004 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 13:54:33.095010 | orchestrator | Saturday 12 July 2025 13:50:10 +0000 (0:00:00.582) 0:07:28.270 *********
2025-07-12 13:54:33.095018 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.095023 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.095029 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.095034 | orchestrator |
2025-07-12 13:54:33.095039 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 13:54:33.095045 | orchestrator | Saturday 12 July 2025 13:50:10 +0000 (0:00:00.332) 0:07:28.602 *********
2025-07-12 13:54:33.095050 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.095056 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.095061 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.095067 | orchestrator |
2025-07-12 13:54:33.095072 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 13:54:33.095078 | orchestrator | Saturday 12 July 2025 13:50:11 +0000 (0:00:00.293) 0:07:28.896 *********
2025-07-12 13:54:33.095087 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.095092 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.095098 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.095103 | orchestrator |
2025-07-12 13:54:33.095109 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 13:54:33.095114 | orchestrator | Saturday 12 July 2025 13:50:11 +0000 (0:00:00.294) 0:07:29.190 *********
2025-07-12 13:54:33.095120 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.095125 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.095130 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.095136 | orchestrator |
2025-07-12 13:54:33.095141 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 13:54:33.095147 | orchestrator | Saturday 12 July 2025 13:50:11 +0000 (0:00:00.604) 0:07:29.795 *********
2025-07-12 13:54:33.095152 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.095157 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.095163 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.095168 | orchestrator |
2025-07-12 13:54:33.095174 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-07-12 13:54:33.095179 | orchestrator | Saturday 12 July 2025 13:50:12 +0000 (0:00:00.536) 0:07:30.332 *********
2025-07-12 13:54:33.095184 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.095190 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.095195 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.095201 | orchestrator |
2025-07-12 13:54:33.095206 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-07-12 13:54:33.095211 | orchestrator | Saturday 12 July 2025 13:50:12 +0000 (0:00:00.311) 0:07:30.644 *********
2025-07-12 13:54:33.095217 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 13:54:33.095222 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:54:33.095228 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:54:33.095233 | orchestrator |
2025-07-12 13:54:33.095238 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-07-12 13:54:33.095244 | orchestrator | Saturday 12 July 2025 13:50:13 +0000 (0:00:00.854) 0:07:31.498 *********
2025-07-12 13:54:33.095249 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.095255 | orchestrator |
2025-07-12 13:54:33.095260 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-07-12 13:54:33.095265 | orchestrator | Saturday 12 July 2025 13:50:14 +0000 (0:00:00.800) 0:07:32.298 *********
2025-07-12 13:54:33.095271 | orchestrator | skipping:
[testbed-node-3] 2025-07-12 13:54:33.095276 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.095281 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.095287 | orchestrator | 2025-07-12 13:54:33.095292 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-07-12 13:54:33.095298 | orchestrator | Saturday 12 July 2025 13:50:14 +0000 (0:00:00.295) 0:07:32.594 ********* 2025-07-12 13:54:33.095303 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.095308 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.095314 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.095319 | orchestrator | 2025-07-12 13:54:33.095327 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-07-12 13:54:33.095333 | orchestrator | Saturday 12 July 2025 13:50:15 +0000 (0:00:00.294) 0:07:32.889 ********* 2025-07-12 13:54:33.095338 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.095343 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.095349 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.095354 | orchestrator | 2025-07-12 13:54:33.095360 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-07-12 13:54:33.095365 | orchestrator | Saturday 12 July 2025 13:50:15 +0000 (0:00:00.853) 0:07:33.742 ********* 2025-07-12 13:54:33.095376 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.095381 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.095387 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.095392 | orchestrator | 2025-07-12 13:54:33.095398 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-07-12 13:54:33.095403 | orchestrator | Saturday 12 July 2025 13:50:16 +0000 (0:00:00.350) 0:07:34.093 ********* 2025-07-12 13:54:33.095408 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-12 13:54:33.095414 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-12 13:54:33.095419 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-12 13:54:33.095425 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-12 13:54:33.095430 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-12 13:54:33.095435 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-12 13:54:33.095451 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-12 13:54:33.095459 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-12 13:54:33.095465 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-12 13:54:33.095470 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-12 13:54:33.095476 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-12 13:54:33.095481 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-12 13:54:33.095487 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-12 13:54:33.095492 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-12 13:54:33.095497 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-12 13:54:33.095503 | orchestrator | 2025-07-12 13:54:33.095508 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-07-12 13:54:33.095514 | orchestrator | Saturday 12 July 2025 13:50:19 +0000 (0:00:03.102) 0:07:37.195 ********* 2025-07-12 13:54:33.095519 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.095525 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.095530 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.095535 | orchestrator | 2025-07-12 13:54:33.095541 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-07-12 13:54:33.095546 | orchestrator | Saturday 12 July 2025 13:50:19 +0000 (0:00:00.306) 0:07:37.501 ********* 2025-07-12 13:54:33.095552 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.095557 | orchestrator | 2025-07-12 13:54:33.095563 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-07-12 13:54:33.095568 | orchestrator | Saturday 12 July 2025 13:50:20 +0000 (0:00:00.737) 0:07:38.239 ********* 2025-07-12 13:54:33.095574 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-12 13:54:33.095579 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-12 13:54:33.095585 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-12 13:54:33.095590 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-07-12 13:54:33.095595 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-07-12 13:54:33.095601 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-07-12 13:54:33.095606 | orchestrator | 2025-07-12 13:54:33.095612 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-07-12 13:54:33.095621 | orchestrator | Saturday 12 July 2025 13:50:21 +0000 (0:00:00.959) 0:07:39.198 ********* 2025-07-12 13:54:33.095627 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:33.095632 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 13:54:33.095637 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:54:33.095643 | orchestrator | 2025-07-12 13:54:33.095648 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-07-12 13:54:33.095654 | orchestrator | Saturday 12 July 2025 13:50:23 +0000 (0:00:01.927) 0:07:41.125 ********* 2025-07-12 13:54:33.095659 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 13:54:33.095664 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 13:54:33.095670 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.095675 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 13:54:33.095681 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-12 13:54:33.095686 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.095691 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 13:54:33.095697 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-12 13:54:33.095705 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.095711 | orchestrator | 2025-07-12 13:54:33.095716 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-07-12 13:54:33.095722 | orchestrator | Saturday 12 July 2025 13:50:24 +0000 (0:00:01.116) 0:07:42.242 ********* 2025-07-12 13:54:33.095727 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 13:54:33.095732 | orchestrator | 2025-07-12 13:54:33.095738 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-07-12 13:54:33.095743 | orchestrator | Saturday 12 July 2025 13:50:26 +0000 (0:00:02.558) 0:07:44.800 ********* 2025-07-12 13:54:33.095749 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.095754 | orchestrator | 2025-07-12 13:54:33.095759 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-07-12 13:54:33.095765 | orchestrator | Saturday 12 July 2025 13:50:27 +0000 (0:00:00.585) 0:07:45.386 ********* 2025-07-12 13:54:33.095770 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2177925c-0e94-5467-9f04-b37733dbe47a', 'data_vg': 'ceph-2177925c-0e94-5467-9f04-b37733dbe47a'}) 2025-07-12 13:54:33.095776 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-09698b4c-8482-58a0-ad33-d3500ef3a9f7', 'data_vg': 'ceph-09698b4c-8482-58a0-ad33-d3500ef3a9f7'}) 2025-07-12 13:54:33.095782 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f88c8806-82e1-5c41-a829-e62dc4a8fdb6', 'data_vg': 'ceph-f88c8806-82e1-5c41-a829-e62dc4a8fdb6'}) 2025-07-12 13:54:33.095790 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-10b3d195-009d-5006-b5f6-1b7aa1316d97', 'data_vg': 'ceph-10b3d195-009d-5006-b5f6-1b7aa1316d97'}) 2025-07-12 13:54:33.095796 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fbedf305-2fae-5605-926c-96a21a5245d1', 'data_vg': 'ceph-fbedf305-2fae-5605-926c-96a21a5245d1'}) 2025-07-12 13:54:33.095802 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f35471dc-23d0-5222-b540-93882fae0f69', 'data_vg': 'ceph-f35471dc-23d0-5222-b540-93882fae0f69'}) 2025-07-12 13:54:33.095807 | orchestrator | 2025-07-12 13:54:33.095813 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-07-12 13:54:33.095818 | orchestrator | Saturday 12 July 2025 13:51:08 +0000 (0:00:41.193) 0:08:26.580 ********* 2025-07-12 13:54:33.095824 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.095829 | orchestrator | skipping: [testbed-node-4] 2025-07-12 
13:54:33.095835 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.095840 | orchestrator | 2025-07-12 13:54:33.095846 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-07-12 13:54:33.095855 | orchestrator | Saturday 12 July 2025 13:51:09 +0000 (0:00:00.529) 0:08:27.109 ********* 2025-07-12 13:54:33.095860 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.095866 | orchestrator | 2025-07-12 13:54:33.095871 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-07-12 13:54:33.095877 | orchestrator | Saturday 12 July 2025 13:51:09 +0000 (0:00:00.566) 0:08:27.675 ********* 2025-07-12 13:54:33.095882 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.095888 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.095893 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.095898 | orchestrator | 2025-07-12 13:54:33.095904 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-07-12 13:54:33.095909 | orchestrator | Saturday 12 July 2025 13:51:10 +0000 (0:00:00.655) 0:08:28.331 ********* 2025-07-12 13:54:33.095915 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.095920 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.095925 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.095931 | orchestrator | 2025-07-12 13:54:33.095936 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-07-12 13:54:33.095942 | orchestrator | Saturday 12 July 2025 13:51:13 +0000 (0:00:02.770) 0:08:31.101 ********* 2025-07-12 13:54:33.095947 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.095952 | orchestrator | 2025-07-12 13:54:33.095958 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-07-12 13:54:33.095963 | orchestrator | Saturday 12 July 2025 13:51:13 +0000 (0:00:00.544) 0:08:31.645 ********* 2025-07-12 13:54:33.095969 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.095974 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.095980 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.095985 | orchestrator | 2025-07-12 13:54:33.095990 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-07-12 13:54:33.095996 | orchestrator | Saturday 12 July 2025 13:51:14 +0000 (0:00:01.142) 0:08:32.788 ********* 2025-07-12 13:54:33.096001 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.096007 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.096012 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.096017 | orchestrator | 2025-07-12 13:54:33.096023 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-07-12 13:54:33.096028 | orchestrator | Saturday 12 July 2025 13:51:16 +0000 (0:00:01.398) 0:08:34.186 ********* 2025-07-12 13:54:33.096034 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.096039 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.096044 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.096050 | orchestrator | 2025-07-12 13:54:33.096055 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-07-12 13:54:33.096060 | orchestrator | Saturday 12 July 2025 13:51:18 +0000 (0:00:01.704) 0:08:35.890 ********* 2025-07-12 13:54:33.096066 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096071 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.096080 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.096085 | orchestrator | 2025-07-12 13:54:33.096091 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-07-12 13:54:33.096096 | orchestrator | Saturday 12 July 2025 13:51:18 +0000 (0:00:00.342) 0:08:36.233 ********* 2025-07-12 13:54:33.096102 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096107 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.096112 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.096118 | orchestrator | 2025-07-12 13:54:33.096123 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-07-12 13:54:33.096128 | orchestrator | Saturday 12 July 2025 13:51:18 +0000 (0:00:00.314) 0:08:36.547 ********* 2025-07-12 13:54:33.096134 | orchestrator | ok: [testbed-node-3] => (item=2) 2025-07-12 13:54:33.096143 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-07-12 13:54:33.096148 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-07-12 13:54:33.096154 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-07-12 13:54:33.096159 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-07-12 13:54:33.096164 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-12 13:54:33.096170 | orchestrator | 2025-07-12 13:54:33.096175 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-07-12 13:54:33.096181 | orchestrator | Saturday 12 July 2025 13:51:20 +0000 (0:00:01.377) 0:08:37.925 ********* 2025-07-12 13:54:33.096186 | orchestrator | changed: [testbed-node-3] => (item=2) 2025-07-12 13:54:33.096191 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-07-12 13:54:33.096197 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-07-12 13:54:33.096202 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-07-12 13:54:33.096208 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-07-12 13:54:33.096213 | orchestrator | changed: [testbed-node-5] => (item=0) 2025-07-12 13:54:33.096218 | orchestrator | 2025-07-12 13:54:33.096224 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-07-12 13:54:33.096232 | orchestrator | Saturday 12 July 2025 13:51:22 +0000 (0:00:02.046) 0:08:39.971 ********* 2025-07-12 13:54:33.096237 | orchestrator | changed: [testbed-node-3] => (item=2) 2025-07-12 13:54:33.096243 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-07-12 13:54:33.096248 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-07-12 13:54:33.096254 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-07-12 13:54:33.096259 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-07-12 13:54:33.096264 | orchestrator | changed: [testbed-node-5] => (item=0) 2025-07-12 13:54:33.096270 | orchestrator | 2025-07-12 13:54:33.096275 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-07-12 13:54:33.096281 | orchestrator | Saturday 12 July 2025 13:51:25 +0000 (0:00:03.492) 0:08:43.464 ********* 2025-07-12 13:54:33.096286 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096291 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.096297 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-12 13:54:33.096302 | orchestrator | 2025-07-12 13:54:33.096308 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-07-12 13:54:33.096313 | orchestrator | Saturday 12 July 2025 13:51:28 +0000 (0:00:02.691) 0:08:46.155 ********* 2025-07-12 13:54:33.096318 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096324 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.096329 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-07-12 13:54:33.096335 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-12 13:54:33.096340 | orchestrator | 2025-07-12 13:54:33.096346 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-07-12 13:54:33.096351 | orchestrator | Saturday 12 July 2025 13:51:41 +0000 (0:00:12.987) 0:08:59.142 ********* 2025-07-12 13:54:33.096357 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096362 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.096367 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.096373 | orchestrator | 2025-07-12 13:54:33.096378 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 13:54:33.096384 | orchestrator | Saturday 12 July 2025 13:51:42 +0000 (0:00:00.856) 0:08:59.999 ********* 2025-07-12 13:54:33.096389 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096394 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.096400 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.096405 | orchestrator | 2025-07-12 13:54:33.096411 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-07-12 13:54:33.096416 | orchestrator | Saturday 12 July 2025 13:51:42 +0000 (0:00:00.557) 0:09:00.557 ********* 2025-07-12 13:54:33.096421 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.096432 | orchestrator | 2025-07-12 13:54:33.096437 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-07-12 13:54:33.096456 | orchestrator | Saturday 12 July 2025 13:51:43 +0000 (0:00:00.524) 0:09:01.081 ********* 2025-07-12 13:54:33.096462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:33.096467 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-07-12 13:54:33.096473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:33.096478 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096484 | orchestrator | 2025-07-12 13:54:33.096489 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-07-12 13:54:33.096495 | orchestrator | Saturday 12 July 2025 13:51:43 +0000 (0:00:00.375) 0:09:01.457 ********* 2025-07-12 13:54:33.096500 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096506 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.096511 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.096516 | orchestrator | 2025-07-12 13:54:33.096522 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-07-12 13:54:33.096527 | orchestrator | Saturday 12 July 2025 13:51:43 +0000 (0:00:00.309) 0:09:01.766 ********* 2025-07-12 13:54:33.096533 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096538 | orchestrator | 2025-07-12 13:54:33.096546 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-07-12 13:54:33.096552 | orchestrator | Saturday 12 July 2025 13:51:44 +0000 (0:00:00.214) 0:09:01.981 ********* 2025-07-12 13:54:33.096557 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096563 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.096568 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.096574 | orchestrator | 2025-07-12 13:54:33.096579 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-07-12 13:54:33.096584 | orchestrator | Saturday 12 July 2025 13:51:44 +0000 (0:00:00.558) 0:09:02.539 ********* 2025-07-12 13:54:33.096590 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096595 | orchestrator | 2025-07-12 13:54:33.096601 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-07-12 13:54:33.096606 | orchestrator | Saturday 12 July 2025 13:51:44 +0000 (0:00:00.224) 0:09:02.764 ********* 2025-07-12 13:54:33.096612 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096617 | orchestrator | 2025-07-12 13:54:33.096623 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-07-12 13:54:33.096628 | orchestrator | Saturday 12 July 2025 13:51:45 +0000 (0:00:00.226) 0:09:02.990 ********* 2025-07-12 13:54:33.096633 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096639 | orchestrator | 2025-07-12 13:54:33.096644 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-07-12 13:54:33.096650 | orchestrator | Saturday 12 July 2025 13:51:45 +0000 (0:00:00.134) 0:09:03.125 ********* 2025-07-12 13:54:33.096655 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096660 | orchestrator | 2025-07-12 13:54:33.096666 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-07-12 13:54:33.096671 | orchestrator | Saturday 12 July 2025 13:51:45 +0000 (0:00:00.225) 0:09:03.351 ********* 2025-07-12 13:54:33.096677 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096682 | orchestrator | 2025-07-12 13:54:33.096690 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-07-12 13:54:33.096696 | orchestrator | Saturday 12 July 2025 13:51:45 +0000 (0:00:00.233) 0:09:03.584 ********* 2025-07-12 13:54:33.096701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:33.096707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:33.096712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:33.096717 | orchestrator | skipping: [testbed-node-3] 2025-07-12 
13:54:33.096727 | orchestrator | 2025-07-12 13:54:33.096732 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-07-12 13:54:33.096738 | orchestrator | Saturday 12 July 2025 13:51:46 +0000 (0:00:00.387) 0:09:03.971 ********* 2025-07-12 13:54:33.096743 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096748 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.096754 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.096759 | orchestrator | 2025-07-12 13:54:33.096765 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-07-12 13:54:33.096770 | orchestrator | Saturday 12 July 2025 13:51:46 +0000 (0:00:00.299) 0:09:04.271 ********* 2025-07-12 13:54:33.096775 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096781 | orchestrator | 2025-07-12 13:54:33.096786 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-07-12 13:54:33.096792 | orchestrator | Saturday 12 July 2025 13:51:47 +0000 (0:00:00.800) 0:09:05.072 ********* 2025-07-12 13:54:33.096797 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096802 | orchestrator | 2025-07-12 13:54:33.096808 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-07-12 13:54:33.096813 | orchestrator | 2025-07-12 13:54:33.096819 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 13:54:33.096824 | orchestrator | Saturday 12 July 2025 13:51:47 +0000 (0:00:00.708) 0:09:05.780 ********* 2025-07-12 13:54:33.096830 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.096835 | orchestrator | 2025-07-12 13:54:33.096841 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-07-12 13:54:33.096846 | orchestrator | Saturday 12 July 2025 13:51:49 +0000 (0:00:01.231) 0:09:07.011 ********* 2025-07-12 13:54:33.096852 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.096857 | orchestrator | 2025-07-12 13:54:33.096862 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 13:54:33.096868 | orchestrator | Saturday 12 July 2025 13:51:50 +0000 (0:00:01.184) 0:09:08.196 ********* 2025-07-12 13:54:33.096873 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.096879 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:54:33.096884 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.096889 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:54:33.096895 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:54:33.096900 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.096906 | orchestrator | 2025-07-12 13:54:33.096911 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 13:54:33.096916 | orchestrator | Saturday 12 July 2025 13:51:51 +0000 (0:00:00.880) 0:09:09.077 ********* 2025-07-12 13:54:33.096922 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:54:33.096927 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:54:33.096933 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:54:33.096938 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.096944 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.096949 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.096954 | orchestrator | 2025-07-12 13:54:33.096960 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 13:54:33.096965 | orchestrator | Saturday 12 
July 2025 13:51:52 +0000 (0:00:01.076) 0:09:10.153 *********
2025-07-12 13:54:33.096971 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.096976 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.096984 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.096990 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.096995 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.097001 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.097006 | orchestrator |
2025-07-12 13:54:33.097012 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 13:54:33.097021 | orchestrator | Saturday 12 July 2025 13:51:53 +0000 (0:00:01.217) 0:09:11.370 *********
2025-07-12 13:54:33.097026 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.097032 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.097037 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.097042 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.097048 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.097053 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.097059 | orchestrator |
2025-07-12 13:54:33.097064 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 13:54:33.097070 | orchestrator | Saturday 12 July 2025 13:51:54 +0000 (0:00:01.040) 0:09:12.411 *********
2025-07-12 13:54:33.097075 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.097081 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.097086 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.097091 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.097097 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.097102 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.097108 | orchestrator |
2025-07-12 13:54:33.097113 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 13:54:33.097118 | orchestrator | Saturday 12 July 2025 13:51:55 +0000 (0:00:00.904) 0:09:13.315 *********
2025-07-12 13:54:33.097124 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.097129 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.097135 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.097140 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.097146 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.097151 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.097156 | orchestrator |
2025-07-12 13:54:33.097164 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 13:54:33.097170 | orchestrator | Saturday 12 July 2025 13:51:56 +0000 (0:00:00.612) 0:09:13.927 *********
2025-07-12 13:54:33.097175 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.097181 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.097186 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.097191 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.097197 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.097202 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.097207 | orchestrator |
2025-07-12 13:54:33.097213 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 13:54:33.097218 | orchestrator | Saturday 12 July 2025 13:51:56 +0000 (0:00:00.840) 0:09:14.768 *********
2025-07-12 13:54:33.097224 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.097229 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.097235 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.097240 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.097245 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.097251 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.097256 | orchestrator |
2025-07-12 13:54:33.097262 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 13:54:33.097267 | orchestrator | Saturday 12 July 2025 13:51:58 +0000 (0:00:01.170) 0:09:15.938 *********
2025-07-12 13:54:33.097273 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.097278 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.097283 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.097289 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.097294 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.097299 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.097305 | orchestrator |
2025-07-12 13:54:33.097310 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 13:54:33.097316 | orchestrator | Saturday 12 July 2025 13:51:59 +0000 (0:00:01.386) 0:09:17.324 *********
2025-07-12 13:54:33.097321 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.097327 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.097336 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.097341 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.097346 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.097352 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.097357 | orchestrator |
2025-07-12 13:54:33.097362 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 13:54:33.097368 | orchestrator | Saturday 12 July 2025 13:52:00 +0000 (0:00:00.624) 0:09:17.948 *********
2025-07-12 13:54:33.097373 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.097379 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.097384 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.097389 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.097395 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.097400 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.097405 | orchestrator |
2025-07-12 13:54:33.097411 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 13:54:33.097416 | orchestrator | Saturday 12 July 2025 13:52:00 +0000 (0:00:00.823) 0:09:18.772 *********
2025-07-12 13:54:33.097422 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.097427 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.097433 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.097451 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.097456 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.097462 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.097467 | orchestrator |
2025-07-12 13:54:33.097473 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 13:54:33.097478 | orchestrator | Saturday 12 July 2025 13:52:01 +0000 (0:00:00.625) 0:09:19.398 *********
2025-07-12 13:54:33.097484 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.097489 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.097495 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.097500 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.097505 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.097511 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.097516 | orchestrator |
2025-07-12 13:54:33.097522 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 13:54:33.097527 | orchestrator | Saturday 12 July 2025 13:52:02 +0000 (0:00:00.819) 0:09:20.217 *********
2025-07-12 13:54:33.097533 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.097541 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.097547 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.097552 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.097558 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.097563 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.097569 | orchestrator |
2025-07-12 13:54:33.097574 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 13:54:33.097580 | orchestrator | Saturday 12 July 2025 13:52:02 +0000 (0:00:00.631) 0:09:20.848 *********
2025-07-12 13:54:33.097585 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.097590 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.097596 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.097601 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.097607 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.097612 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.097617 | orchestrator |
2025-07-12 13:54:33.097623 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 13:54:33.097628 | orchestrator | Saturday 12 July 2025 13:52:03 +0000 (0:00:00.834) 0:09:21.683 *********
2025-07-12 13:54:33.097634 | orchestrator | skipping: [testbed-node-0]
2025-07-12 13:54:33.097639 | orchestrator | skipping: [testbed-node-1]
2025-07-12 13:54:33.097644 | orchestrator | skipping: [testbed-node-2]
2025-07-12 13:54:33.097650 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.097655 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.097664 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.097670 | orchestrator |
2025-07-12 13:54:33.097675 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 13:54:33.097681 | orchestrator | Saturday 12 July 2025 13:52:04 +0000 (0:00:00.585) 0:09:22.269 *********
2025-07-12 13:54:33.097686 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.097692 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.097697 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.097703 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.097708 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.097714 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.097719 | orchestrator |
2025-07-12 13:54:33.097727 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 13:54:33.097733 | orchestrator | Saturday 12 July 2025 13:52:05 +0000 (0:00:00.811) 0:09:23.080 *********
2025-07-12 13:54:33.097738 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.097744 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.097749 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.097754 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.097760 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.097765 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.097770 | orchestrator |
2025-07-12 13:54:33.097776 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 13:54:33.097781 | orchestrator | Saturday 12 July 2025 13:52:05 +0000 (0:00:00.647) 0:09:23.727 *********
2025-07-12 13:54:33.097787 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.097792 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.097798 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.097803 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.097808 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.097814 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.097819 | orchestrator |
2025-07-12 13:54:33.097824 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-07-12 13:54:33.097830 | orchestrator | Saturday 12 July 2025 13:52:07 +0000 (0:00:01.213) 0:09:24.941 *********
2025-07-12 13:54:33.097835 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.097841 | orchestrator |
2025-07-12 13:54:33.097846 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-07-12 13:54:33.097852 | orchestrator | Saturday 12 July 2025 13:52:10 +0000 (0:00:03.882) 0:09:28.823 *********
2025-07-12 13:54:33.097857 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.097862 | orchestrator |
2025-07-12 13:54:33.097868 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-07-12 13:54:33.097873 | orchestrator | Saturday 12 July 2025 13:52:12 +0000 (0:00:01.977) 0:09:30.800 *********
2025-07-12 13:54:33.097879 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.097884 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.097889 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.097895 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.097900 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.097906 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.097911 | orchestrator |
2025-07-12 13:54:33.097917 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-07-12 13:54:33.097922 | orchestrator | Saturday 12 July 2025 13:52:14 +0000 (0:00:01.808) 0:09:32.609 *********
2025-07-12 13:54:33.097927 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.097933 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.097938 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.097944 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.097949 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.097954 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.097960 | orchestrator |
2025-07-12 13:54:33.097965 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-07-12 13:54:33.097971 | orchestrator | Saturday 12 July 2025 13:52:15 +0000 (0:00:01.070) 0:09:33.679 *********
2025-07-12 13:54:33.097980 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.097986 | orchestrator |
2025-07-12 13:54:33.097991 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-07-12 13:54:33.097997 | orchestrator | Saturday 12 July 2025 13:52:17 +0000 (0:00:01.306) 0:09:34.986 *********
2025-07-12 13:54:33.098002 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.098008 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.098028 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.098035 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.098041 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.098046 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.098051 | orchestrator |
2025-07-12 13:54:33.098057 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-07-12 13:54:33.098062 | orchestrator | Saturday 12 July 2025 13:52:19 +0000 (0:00:02.122) 0:09:37.108 *********
2025-07-12 13:54:33.098067 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.098078 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.098083 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.098089 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.098094 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.098100 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.098105 | orchestrator |
2025-07-12 13:54:33.098110 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-07-12 13:54:33.098116 | orchestrator | Saturday 12 July 2025 13:52:22 +0000 (0:00:03.213) 0:09:40.322 *********
2025-07-12 13:54:33.098122 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.098127 | orchestrator |
2025-07-12 13:54:33.098133 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-07-12 13:54:33.098138 | orchestrator | Saturday 12 July 2025 13:52:23 +0000 (0:00:01.342) 0:09:41.664 *********
2025-07-12 13:54:33.098143 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.098149 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.098154 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.098160 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098165 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098171 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098176 | orchestrator |
2025-07-12 13:54:33.098181 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-07-12 13:54:33.098187 | orchestrator | Saturday 12 July 2025 13:52:24 +0000 (0:00:00.875) 0:09:42.540 *********
2025-07-12 13:54:33.098192 | orchestrator | changed: [testbed-node-0]
2025-07-12 13:54:33.098198 | orchestrator | changed: [testbed-node-1]
2025-07-12 13:54:33.098203 | orchestrator | changed: [testbed-node-2]
2025-07-12 13:54:33.098209 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.098214 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.098219 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.098225 | orchestrator |
2025-07-12 13:54:33.098230 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-07-12 13:54:33.098239 | orchestrator | Saturday 12 July 2025 13:52:26 +0000 (0:00:02.228) 0:09:44.768 *********
2025-07-12 13:54:33.098245 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:54:33.098250 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:54:33.098256 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:54:33.098261 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098267 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098272 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098277 | orchestrator |
2025-07-12 13:54:33.098283 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-07-12 13:54:33.098288 | orchestrator |
2025-07-12 13:54:33.098294 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 13:54:33.098299 | orchestrator | Saturday 12 July 2025 13:52:28 +0000 (0:00:01.429) 0:09:46.197 *********
2025-07-12 13:54:33.098308 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.098313 | orchestrator |
2025-07-12 13:54:33.098319 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 13:54:33.098324 | orchestrator | Saturday 12 July 2025 13:52:28 +0000 (0:00:00.584) 0:09:46.781 *********
2025-07-12 13:54:33.098330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.098335 | orchestrator |
2025-07-12 13:54:33.098341 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 13:54:33.098346 | orchestrator | Saturday 12 July 2025 13:52:29 +0000 (0:00:00.736) 0:09:47.518 *********
2025-07-12 13:54:33.098351 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.098357 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.098362 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.098367 | orchestrator |
2025-07-12 13:54:33.098373 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 13:54:33.098378 | orchestrator | Saturday 12 July 2025 13:52:29 +0000 (0:00:00.315) 0:09:47.833 *********
2025-07-12 13:54:33.098384 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098389 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098394 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098399 | orchestrator |
2025-07-12 13:54:33.098405 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 13:54:33.098410 | orchestrator | Saturday 12 July 2025 13:52:30 +0000 (0:00:00.773) 0:09:48.607 *********
2025-07-12 13:54:33.098416 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098421 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098426 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098432 | orchestrator |
2025-07-12 13:54:33.098437 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 13:54:33.098468 | orchestrator | Saturday 12 July 2025 13:52:31 +0000 (0:00:01.111) 0:09:49.719 *********
2025-07-12 13:54:33.098474 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098480 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098485 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098490 | orchestrator |
2025-07-12 13:54:33.098496 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 13:54:33.098501 | orchestrator | Saturday 12 July 2025 13:52:32 +0000 (0:00:00.748) 0:09:50.468 *********
2025-07-12 13:54:33.098507 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.098513 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.098518 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.098523 | orchestrator |
2025-07-12 13:54:33.098529 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 13:54:33.098534 | orchestrator | Saturday 12 July 2025 13:52:32 +0000 (0:00:00.331) 0:09:50.800 *********
2025-07-12 13:54:33.098540 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.098545 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.098551 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.098556 | orchestrator |
2025-07-12 13:54:33.098562 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 13:54:33.098567 | orchestrator | Saturday 12 July 2025 13:52:33 +0000 (0:00:00.348) 0:09:51.149 *********
2025-07-12 13:54:33.098573 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.098581 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.098587 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.098592 | orchestrator |
2025-07-12 13:54:33.098598 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 13:54:33.098603 | orchestrator | Saturday 12 July 2025 13:52:33 +0000 (0:00:00.616) 0:09:51.765 *********
2025-07-12 13:54:33.098609 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098618 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098624 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098629 | orchestrator |
2025-07-12 13:54:33.098635 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 13:54:33.098640 | orchestrator | Saturday 12 July 2025 13:52:34 +0000 (0:00:00.798) 0:09:52.564 *********
2025-07-12 13:54:33.098646 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098651 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098657 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098662 | orchestrator |
2025-07-12 13:54:33.098668 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 13:54:33.098673 | orchestrator | Saturday 12 July 2025 13:52:35 +0000 (0:00:00.858) 0:09:53.423 *********
2025-07-12 13:54:33.098679 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.098684 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.098690 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.098695 | orchestrator |
2025-07-12 13:54:33.098700 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 13:54:33.098705 | orchestrator | Saturday 12 July 2025 13:52:35 +0000 (0:00:00.384) 0:09:53.807 *********
2025-07-12 13:54:33.098710 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.098715 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.098720 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.098725 | orchestrator |
2025-07-12 13:54:33.098730 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 13:54:33.098734 | orchestrator | Saturday 12 July 2025 13:52:36 +0000 (0:00:00.649) 0:09:54.457 *********
2025-07-12 13:54:33.098742 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098747 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098752 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098757 | orchestrator |
2025-07-12 13:54:33.098762 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 13:54:33.098767 | orchestrator | Saturday 12 July 2025 13:52:37 +0000 (0:00:00.406) 0:09:54.864 *********
2025-07-12 13:54:33.098771 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098779 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098787 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098794 | orchestrator |
2025-07-12 13:54:33.098803 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 13:54:33.098811 | orchestrator | Saturday 12 July 2025 13:52:37 +0000 (0:00:00.332) 0:09:55.196 *********
2025-07-12 13:54:33.098819 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098826 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098835 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098840 | orchestrator |
2025-07-12 13:54:33.098845 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 13:54:33.098850 | orchestrator | Saturday 12 July 2025 13:52:37 +0000 (0:00:00.313) 0:09:55.509 *********
2025-07-12 13:54:33.098855 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.098859 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.098864 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.098869 | orchestrator |
2025-07-12 13:54:33.098874 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 13:54:33.098879 | orchestrator | Saturday 12 July 2025 13:52:38 +0000 (0:00:00.609) 0:09:56.119 *********
2025-07-12 13:54:33.098884 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.098889 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.098893 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.098898 | orchestrator |
2025-07-12 13:54:33.098903 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 13:54:33.098908 | orchestrator | Saturday 12 July 2025 13:52:38 +0000 (0:00:00.340) 0:09:56.459 *********
2025-07-12 13:54:33.098913 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.098917 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.098922 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.098931 | orchestrator |
2025-07-12 13:54:33.098936 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 13:54:33.098941 | orchestrator | Saturday 12 July 2025 13:52:38 +0000 (0:00:00.347) 0:09:56.806 *********
2025-07-12 13:54:33.098946 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098951 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098956 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098961 | orchestrator |
2025-07-12 13:54:33.098966 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 13:54:33.098971 | orchestrator | Saturday 12 July 2025 13:52:39 +0000 (0:00:00.384) 0:09:57.190 *********
2025-07-12 13:54:33.098975 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.098980 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.098985 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.098990 | orchestrator |
2025-07-12 13:54:33.098995 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-07-12 13:54:33.099000 | orchestrator | Saturday 12 July 2025 13:52:40 +0000 (0:00:00.864) 0:09:58.055 *********
2025-07-12 13:54:33.099004 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.099009 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.099014 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-07-12 13:54:33.099019 | orchestrator |
2025-07-12 13:54:33.099024 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-07-12 13:54:33.099028 | orchestrator | Saturday 12 July 2025 13:52:40 +0000 (0:00:00.432) 0:09:58.487 *********
2025-07-12 13:54:33.099033 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 13:54:33.099038 | orchestrator |
2025-07-12 13:54:33.099043 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-07-12 13:54:33.099047 | orchestrator | Saturday 12 July 2025 13:52:43 +0000 (0:00:02.562) 0:10:01.050 *********
2025-07-12 13:54:33.099056 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-07-12 13:54:33.099062 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.099067 | orchestrator |
2025-07-12 13:54:33.099072 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-07-12 13:54:33.099077 | orchestrator | Saturday 12 July 2025 13:52:43 +0000 (0:00:00.221) 0:10:01.271 *********
2025-07-12 13:54:33.099082 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 13:54:33.099092 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 13:54:33.099097 | orchestrator |
2025-07-12 13:54:33.099102 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-07-12 13:54:33.099106 | orchestrator | Saturday 12 July 2025 13:52:52 +0000 (0:00:08.920) 0:10:10.192 *********
2025-07-12 13:54:33.099111 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 13:54:33.099116 | orchestrator |
2025-07-12 13:54:33.099121 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-07-12 13:54:33.099126 | orchestrator | Saturday 12 July 2025 13:52:56 +0000 (0:00:03.836) 0:10:14.028 *********
2025-07-12 13:54:33.099133 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.099138 | orchestrator |
2025-07-12 13:54:33.099143 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-07-12 13:54:33.099148 | orchestrator | Saturday 12 July 2025 13:52:56 +0000 (0:00:00.778) 0:10:14.806 *********
2025-07-12 13:54:33.099156 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-07-12 13:54:33.099161 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-07-12 13:54:33.099166 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-07-12 13:54:33.099171 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-07-12 13:54:33.099176 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-07-12 13:54:33.099181 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-07-12 13:54:33.099185 | orchestrator |
2025-07-12 13:54:33.099190 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-07-12 13:54:33.099195 | orchestrator | Saturday 12 July 2025 13:52:58 +0000 (0:00:01.141) 0:10:15.948 *********
2025-07-12 13:54:33.099200 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 13:54:33.099205 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 13:54:33.099210 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 13:54:33.099214 | orchestrator |
2025-07-12 13:54:33.099219 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-07-12 13:54:33.099224 | orchestrator | Saturday 12 July 2025 13:53:00 +0000 (0:00:02.349) 0:10:18.298 *********
2025-07-12 13:54:33.099229 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 13:54:33.099233 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 13:54:33.099238 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.099243 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 13:54:33.099248 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-07-12 13:54:33.099253 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.099257 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 13:54:33.099262 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-07-12 13:54:33.099267 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.099272 | orchestrator |
2025-07-12 13:54:33.099276 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-07-12 13:54:33.099281 | orchestrator | Saturday 12 July 2025 13:53:01 +0000 (0:00:01.455) 0:10:19.753 *********
2025-07-12 13:54:33.099286 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.099291 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.099296 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.099301 | orchestrator |
2025-07-12 13:54:33.099306 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-07-12 13:54:33.099310 | orchestrator | Saturday 12 July 2025 13:53:04 +0000 (0:00:02.716) 0:10:22.470 *********
2025-07-12 13:54:33.099315 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:54:33.099320 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:54:33.099325 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:54:33.099330 | orchestrator |
2025-07-12 13:54:33.099335 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-07-12 13:54:33.099339 | orchestrator | Saturday 12 July 2025 13:53:04 +0000 (0:00:00.360) 0:10:22.831 *********
2025-07-12 13:54:33.099344 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.099349 | orchestrator |
2025-07-12 13:54:33.099354 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-07-12 13:54:33.099359 | orchestrator | Saturday 12 July 2025 13:53:05 +0000 (0:00:00.790) 0:10:23.621 *********
2025-07-12 13:54:33.099366 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.099371 | orchestrator |
2025-07-12 13:54:33.099376 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-07-12 13:54:33.099381 | orchestrator | Saturday 12 July 2025 13:53:06 +0000 (0:00:00.543) 0:10:24.164 *********
2025-07-12 13:54:33.099389 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.099394 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.099399 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.099404 | orchestrator |
2025-07-12 13:54:33.099409 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-07-12 13:54:33.099414 | orchestrator | Saturday 12 July 2025 13:53:07 +0000 (0:00:01.200) 0:10:25.365 *********
2025-07-12 13:54:33.099418 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.099423 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.099428 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.099433 | orchestrator |
2025-07-12 13:54:33.099438 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-07-12 13:54:33.099456 | orchestrator | Saturday 12 July 2025 13:53:08 +0000 (0:00:01.377) 0:10:26.743 *********
2025-07-12 13:54:33.099461 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.099465 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.099470 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.099475 | orchestrator |
2025-07-12 13:54:33.099480 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-07-12 13:54:33.099485 | orchestrator | Saturday 12 July 2025 13:53:10 +0000 (0:00:01.733) 0:10:28.476 *********
2025-07-12 13:54:33.099490 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.099494 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.099499 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.099504 | orchestrator |
2025-07-12 13:54:33.099509 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-07-12 13:54:33.099513 | orchestrator | Saturday 12 July 2025 13:53:12 +0000 (0:00:01.938) 0:10:30.415 *********
2025-07-12 13:54:33.099519 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:54:33.099530 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:54:33.099537 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:54:33.099542 | orchestrator |
2025-07-12 13:54:33.099547 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 13:54:33.099552 | orchestrator | Saturday 12 July 2025 13:53:14 +0000 (0:00:01.530) 0:10:31.946 *********
2025-07-12 13:54:33.099557 | orchestrator | changed: [testbed-node-4]
2025-07-12 13:54:33.099562 | orchestrator | changed: [testbed-node-3]
2025-07-12 13:54:33.099566 | orchestrator | changed: [testbed-node-5]
2025-07-12 13:54:33.099571 | orchestrator |
2025-07-12 13:54:33.099576 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-07-12 13:54:33.099581 | orchestrator | Saturday 12 July 2025 13:53:14 +0000 (0:00:00.653) 0:10:32.599 *********
2025-07-12 13:54:33.099585 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:54:33.099590 | orchestrator |
2025-07-12 13:54:33.099595 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-07-12 13:54:33.099600 | orchestrator |
Saturday 12 July 2025 13:53:15 +0000 (0:00:00.864) 0:10:33.464 ********* 2025-07-12 13:54:33.099605 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.099610 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.099614 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.099619 | orchestrator | 2025-07-12 13:54:33.099624 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-07-12 13:54:33.099629 | orchestrator | Saturday 12 July 2025 13:53:15 +0000 (0:00:00.364) 0:10:33.828 ********* 2025-07-12 13:54:33.099634 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.099638 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.099643 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.099648 | orchestrator | 2025-07-12 13:54:33.099653 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-07-12 13:54:33.099658 | orchestrator | Saturday 12 July 2025 13:53:17 +0000 (0:00:01.206) 0:10:35.035 ********* 2025-07-12 13:54:33.099662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:33.099670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:33.099675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:33.099680 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.099685 | orchestrator | 2025-07-12 13:54:33.099690 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-07-12 13:54:33.099694 | orchestrator | Saturday 12 July 2025 13:53:18 +0000 (0:00:00.887) 0:10:35.922 ********* 2025-07-12 13:54:33.099699 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.099704 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.099709 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.099713 | orchestrator | 2025-07-12 13:54:33.099718 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-07-12 13:54:33.099723 | orchestrator | 2025-07-12 13:54:33.099728 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 13:54:33.099733 | orchestrator | Saturday 12 July 2025 13:53:18 +0000 (0:00:00.811) 0:10:36.734 ********* 2025-07-12 13:54:33.099737 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.099742 | orchestrator | 2025-07-12 13:54:33.099747 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 13:54:33.099752 | orchestrator | Saturday 12 July 2025 13:53:19 +0000 (0:00:00.487) 0:10:37.221 ********* 2025-07-12 13:54:33.099756 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.099761 | orchestrator | 2025-07-12 13:54:33.099766 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 13:54:33.099771 | orchestrator | Saturday 12 July 2025 13:53:20 +0000 (0:00:00.760) 0:10:37.981 ********* 2025-07-12 13:54:33.099776 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.099781 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.099785 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.099790 | orchestrator | 2025-07-12 13:54:33.099798 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 13:54:33.099803 | orchestrator | Saturday 12 July 2025 13:53:20 +0000 (0:00:00.311) 0:10:38.293 ********* 2025-07-12 13:54:33.099808 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.099812 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.099817 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.099822 | orchestrator | 
2025-07-12 13:54:33.099827 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 13:54:33.099832 | orchestrator | Saturday 12 July 2025 13:53:21 +0000 (0:00:00.717) 0:10:39.011 ********* 2025-07-12 13:54:33.099836 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.099841 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.099846 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.099851 | orchestrator | 2025-07-12 13:54:33.099856 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 13:54:33.099860 | orchestrator | Saturday 12 July 2025 13:53:21 +0000 (0:00:00.709) 0:10:39.720 ********* 2025-07-12 13:54:33.099865 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.099870 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.099875 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.099880 | orchestrator | 2025-07-12 13:54:33.099885 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 13:54:33.099889 | orchestrator | Saturday 12 July 2025 13:53:22 +0000 (0:00:01.054) 0:10:40.775 ********* 2025-07-12 13:54:33.099894 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.099899 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.099904 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.099909 | orchestrator | 2025-07-12 13:54:33.099913 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 13:54:33.099918 | orchestrator | Saturday 12 July 2025 13:53:23 +0000 (0:00:00.323) 0:10:41.098 ********* 2025-07-12 13:54:33.099928 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.099933 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.099938 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.099943 | orchestrator | 2025-07-12 13:54:33.099950 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 13:54:33.099955 | orchestrator | Saturday 12 July 2025 13:53:23 +0000 (0:00:00.316) 0:10:41.414 ********* 2025-07-12 13:54:33.099960 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.099964 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.099969 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.099974 | orchestrator | 2025-07-12 13:54:33.099979 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 13:54:33.099984 | orchestrator | Saturday 12 July 2025 13:53:23 +0000 (0:00:00.332) 0:10:41.746 ********* 2025-07-12 13:54:33.099989 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.099993 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.099998 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.100003 | orchestrator | 2025-07-12 13:54:33.100008 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 13:54:33.100013 | orchestrator | Saturday 12 July 2025 13:53:24 +0000 (0:00:01.007) 0:10:42.754 ********* 2025-07-12 13:54:33.100018 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.100023 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.100028 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.100032 | orchestrator | 2025-07-12 13:54:33.100037 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 13:54:33.100042 | orchestrator | Saturday 12 July 2025 13:53:25 +0000 (0:00:00.775) 0:10:43.529 ********* 2025-07-12 13:54:33.100047 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.100052 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.100057 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.100061 | orchestrator | 2025-07-12 13:54:33.100066 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-07-12 13:54:33.100071 | orchestrator | Saturday 12 July 2025 13:53:25 +0000 (0:00:00.323) 0:10:43.853 ********* 2025-07-12 13:54:33.100076 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.100081 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.100085 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.100090 | orchestrator | 2025-07-12 13:54:33.100095 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 13:54:33.100100 | orchestrator | Saturday 12 July 2025 13:53:26 +0000 (0:00:00.332) 0:10:44.185 ********* 2025-07-12 13:54:33.100105 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.100109 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.100114 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.100119 | orchestrator | 2025-07-12 13:54:33.100124 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 13:54:33.100129 | orchestrator | Saturday 12 July 2025 13:53:26 +0000 (0:00:00.603) 0:10:44.788 ********* 2025-07-12 13:54:33.100133 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.100138 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.100143 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.100148 | orchestrator | 2025-07-12 13:54:33.100153 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 13:54:33.100157 | orchestrator | Saturday 12 July 2025 13:53:27 +0000 (0:00:00.423) 0:10:45.212 ********* 2025-07-12 13:54:33.100162 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.100167 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.100172 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.100177 | orchestrator | 2025-07-12 13:54:33.100182 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-07-12 13:54:33.100186 | orchestrator | Saturday 12 July 2025 13:53:27 +0000 (0:00:00.457) 0:10:45.670 ********* 2025-07-12 13:54:33.100191 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.100196 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.100205 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.100210 | orchestrator | 2025-07-12 13:54:33.100215 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 13:54:33.100220 | orchestrator | Saturday 12 July 2025 13:53:28 +0000 (0:00:00.328) 0:10:45.998 ********* 2025-07-12 13:54:33.100224 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.100229 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.100234 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.100239 | orchestrator | 2025-07-12 13:54:33.100243 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 13:54:33.100251 | orchestrator | Saturday 12 July 2025 13:53:28 +0000 (0:00:00.599) 0:10:46.598 ********* 2025-07-12 13:54:33.100256 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.100261 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.100266 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.100271 | orchestrator | 2025-07-12 13:54:33.100275 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 13:54:33.100280 | orchestrator | Saturday 12 July 2025 13:53:29 +0000 (0:00:00.319) 0:10:46.917 ********* 2025-07-12 13:54:33.100285 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.100290 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.100295 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.100300 | orchestrator | 2025-07-12 13:54:33.100305 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-07-12 13:54:33.100309 | orchestrator | Saturday 12 July 2025 13:53:29 +0000 (0:00:00.356) 0:10:47.273 ********* 2025-07-12 13:54:33.100314 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.100319 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.100324 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.100329 | orchestrator | 2025-07-12 13:54:33.100333 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-07-12 13:54:33.100338 | orchestrator | Saturday 12 July 2025 13:53:30 +0000 (0:00:00.785) 0:10:48.059 ********* 2025-07-12 13:54:33.100343 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.100348 | orchestrator | 2025-07-12 13:54:33.100353 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-12 13:54:33.100358 | orchestrator | Saturday 12 July 2025 13:53:30 +0000 (0:00:00.539) 0:10:48.598 ********* 2025-07-12 13:54:33.100363 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:33.100367 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 13:54:33.100372 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:54:33.100377 | orchestrator | 2025-07-12 13:54:33.100384 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-12 13:54:33.100390 | orchestrator | Saturday 12 July 2025 13:53:32 +0000 (0:00:02.130) 0:10:50.728 ********* 2025-07-12 13:54:33.100394 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 13:54:33.100399 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-12 13:54:33.100404 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.100409 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 13:54:33.100414 
| orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 13:54:33.100418 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.100423 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 13:54:33.100428 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-12 13:54:33.100433 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.100438 | orchestrator | 2025-07-12 13:54:33.100471 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-07-12 13:54:33.100476 | orchestrator | Saturday 12 July 2025 13:53:34 +0000 (0:00:01.615) 0:10:52.344 ********* 2025-07-12 13:54:33.100481 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.100486 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.100494 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.100500 | orchestrator | 2025-07-12 13:54:33.100504 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-07-12 13:54:33.100509 | orchestrator | Saturday 12 July 2025 13:53:34 +0000 (0:00:00.346) 0:10:52.691 ********* 2025-07-12 13:54:33.100514 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.100519 | orchestrator | 2025-07-12 13:54:33.100524 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-07-12 13:54:33.100529 | orchestrator | Saturday 12 July 2025 13:53:35 +0000 (0:00:00.546) 0:10:53.238 ********* 2025-07-12 13:54:33.100534 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.100539 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.100544 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.100549 | orchestrator | 2025-07-12 13:54:33.100554 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-07-12 13:54:33.100558 | orchestrator | Saturday 12 July 2025 13:53:36 +0000 (0:00:01.118) 0:10:54.357 ********* 2025-07-12 13:54:33.100563 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:33.100568 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-12 13:54:33.100573 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:33.100578 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:33.100582 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-12 13:54:33.100587 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-12 13:54:33.100591 | orchestrator | 2025-07-12 13:54:33.100596 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-12 13:54:33.100605 | orchestrator | Saturday 12 July 2025 13:53:41 +0000 (0:00:05.082) 0:10:59.439 ********* 2025-07-12 13:54:33.100610 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:33.100614 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:54:33.100619 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-07-12 13:54:33.100623 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:54:33.100628 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:54:33.100633 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:54:33.100637 | orchestrator | 2025-07-12 13:54:33.100642 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-12 13:54:33.100646 | orchestrator | Saturday 12 July 2025 13:53:44 +0000 (0:00:02.603) 0:11:02.043 ********* 2025-07-12 13:54:33.100651 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 13:54:33.100655 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.100660 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 13:54:33.100665 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.100669 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 13:54:33.100674 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.100678 | orchestrator | 2025-07-12 13:54:33.100683 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-07-12 13:54:33.100690 | orchestrator | Saturday 12 July 2025 13:53:45 +0000 (0:00:01.420) 0:11:03.464 ********* 2025-07-12 13:54:33.100695 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-07-12 13:54:33.100699 | orchestrator | 2025-07-12 13:54:33.100704 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-07-12 13:54:33.100711 | orchestrator | Saturday 12 July 2025 13:53:45 +0000 (0:00:00.253) 0:11:03.718 ********* 2025-07-12 13:54:33.100716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:33.100720 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:33.100725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:33.100730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:33.100734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:33.100739 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.100743 | orchestrator | 2025-07-12 13:54:33.100748 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-07-12 13:54:33.100753 | orchestrator | Saturday 12 July 2025 13:53:46 +0000 (0:00:00.928) 0:11:04.646 ********* 2025-07-12 13:54:33.100757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:33.100762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:33.100766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:33.100771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:33.100776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-12 13:54:33.100780 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.100785 | orchestrator | 2025-07-12 13:54:33.100789 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-07-12 13:54:33.100794 | orchestrator | Saturday 12 July 2025 13:53:48 +0000 (0:00:01.399) 0:11:06.046 ********* 2025-07-12 13:54:33.100799 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 13:54:33.100803 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 13:54:33.100808 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 13:54:33.100812 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 13:54:33.100817 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-12 13:54:33.100821 | orchestrator | 2025-07-12 13:54:33.100826 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-07-12 13:54:33.100833 | orchestrator | Saturday 12 July 2025 13:54:18 +0000 (0:00:29.919) 0:11:35.965 ********* 2025-07-12 13:54:33.100838 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.100846 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.100850 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.100855 | orchestrator | 2025-07-12 13:54:33.100860 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-07-12 13:54:33.100864 | orchestrator | Saturday 12 July 2025 13:54:18 +0000 (0:00:00.265) 0:11:36.231 
********* 2025-07-12 13:54:33.100869 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.100873 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.100878 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.100883 | orchestrator | 2025-07-12 13:54:33.100887 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-07-12 13:54:33.100892 | orchestrator | Saturday 12 July 2025 13:54:18 +0000 (0:00:00.273) 0:11:36.505 ********* 2025-07-12 13:54:33.100896 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.100901 | orchestrator | 2025-07-12 13:54:33.100906 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-07-12 13:54:33.100910 | orchestrator | Saturday 12 July 2025 13:54:19 +0000 (0:00:00.731) 0:11:37.236 ********* 2025-07-12 13:54:33.100915 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.100919 | orchestrator | 2025-07-12 13:54:33.100924 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-07-12 13:54:33.100929 | orchestrator | Saturday 12 July 2025 13:54:19 +0000 (0:00:00.550) 0:11:37.786 ********* 2025-07-12 13:54:33.100933 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.100938 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.100942 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.100947 | orchestrator | 2025-07-12 13:54:33.100954 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-07-12 13:54:33.100958 | orchestrator | Saturday 12 July 2025 13:54:21 +0000 (0:00:01.253) 0:11:39.040 ********* 2025-07-12 13:54:33.100963 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.100967 | orchestrator | 
changed: [testbed-node-4] 2025-07-12 13:54:33.100972 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.100976 | orchestrator | 2025-07-12 13:54:33.100981 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-07-12 13:54:33.100986 | orchestrator | Saturday 12 July 2025 13:54:22 +0000 (0:00:01.433) 0:11:40.473 ********* 2025-07-12 13:54:33.100990 | orchestrator | changed: [testbed-node-3] 2025-07-12 13:54:33.100995 | orchestrator | changed: [testbed-node-4] 2025-07-12 13:54:33.100999 | orchestrator | changed: [testbed-node-5] 2025-07-12 13:54:33.101004 | orchestrator | 2025-07-12 13:54:33.101009 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-07-12 13:54:33.101013 | orchestrator | Saturday 12 July 2025 13:54:24 +0000 (0:00:01.830) 0:11:42.303 ********* 2025-07-12 13:54:33.101018 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.101022 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.101027 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 13:54:33.101032 | orchestrator | 2025-07-12 13:54:33.101036 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 13:54:33.101041 | orchestrator | Saturday 12 July 2025 13:54:27 +0000 (0:00:02.706) 0:11:45.010 ********* 2025-07-12 13:54:33.101045 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.101050 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.101054 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.101059 | orchestrator | 2025-07-12 13:54:33.101063 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-07-12 13:54:33.101071 | orchestrator | Saturday 12 July 2025 13:54:27 +0000 (0:00:00.348) 0:11:45.358 ********* 2025-07-12 13:54:33.101076 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 13:54:33.101081 | orchestrator | 2025-07-12 13:54:33.101085 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-12 13:54:33.101090 | orchestrator | Saturday 12 July 2025 13:54:28 +0000 (0:00:00.558) 0:11:45.916 ********* 2025-07-12 13:54:33.101094 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.101099 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.101103 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.101108 | orchestrator | 2025-07-12 13:54:33.101112 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-12 13:54:33.101117 | orchestrator | Saturday 12 July 2025 13:54:28 +0000 (0:00:00.562) 0:11:46.478 ********* 2025-07-12 13:54:33.101122 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.101126 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:54:33.101131 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:54:33.101135 | orchestrator | 2025-07-12 13:54:33.101140 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-12 13:54:33.101144 | orchestrator | Saturday 12 July 2025 13:54:28 +0000 (0:00:00.354) 0:11:46.833 ********* 2025-07-12 13:54:33.101149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:54:33.101153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:54:33.101158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:54:33.101162 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:54:33.101167 | 
orchestrator | 2025-07-12 13:54:33.101172 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-12 13:54:33.101176 | orchestrator | Saturday 12 July 2025 13:54:29 +0000 (0:00:00.596) 0:11:47.429 ********* 2025-07-12 13:54:33.101181 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:54:33.101188 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:54:33.101193 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:54:33.101198 | orchestrator | 2025-07-12 13:54:33.101202 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:54:33.101207 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-07-12 13:54:33.101212 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-07-12 13:54:33.101216 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-07-12 13:54:33.101221 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-07-12 13:54:33.101226 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-07-12 13:54:33.101230 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-07-12 13:54:33.101235 | orchestrator | 2025-07-12 13:54:33.101240 | orchestrator | 2025-07-12 13:54:33.101244 | orchestrator | 2025-07-12 13:54:33.101249 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:54:33.101254 | orchestrator | Saturday 12 July 2025 13:54:29 +0000 (0:00:00.240) 0:11:47.670 ********* 2025-07-12 13:54:33.101261 | orchestrator | =============================================================================== 2025-07-12 13:54:33.101266 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------ 104.85s 2025-07-12 13:54:33.101270 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.19s 2025-07-12 13:54:33.101279 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.20s 2025-07-12 13:54:33.101283 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.92s 2025-07-12 13:54:33.101288 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.93s 2025-07-12 13:54:33.101292 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.97s 2025-07-12 13:54:33.101297 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.99s 2025-07-12 13:54:33.101302 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.88s 2025-07-12 13:54:33.101306 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.26s 2025-07-12 13:54:33.101311 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.92s 2025-07-12 13:54:33.101315 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.57s 2025-07-12 13:54:33.101320 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.34s 2025-07-12 13:54:33.101324 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.08s 2025-07-12 13:54:33.101329 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.73s 2025-07-12 13:54:33.101333 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.88s 2025-07-12 13:54:33.101338 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.84s 2025-07-12 13:54:33.101342 | orchestrator | ceph-mon : Copy 
admin keyring over to mons ------------------------------ 3.66s 2025-07-12 13:54:33.101347 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.49s 2025-07-12 13:54:33.101352 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.47s 2025-07-12 13:54:33.101356 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.21s 2025-07-12 13:54:33.101361 | orchestrator | 2025-07-12 13:54:33 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:54:33.101365 | orchestrator | 2025-07-12 13:54:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:36.126991 | orchestrator | 2025-07-12 13:54:36 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:54:36.128135 | orchestrator | 2025-07-12 13:54:36 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:54:36.129911 | orchestrator | 2025-07-12 13:54:36 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:54:36.129933 | orchestrator | 2025-07-12 13:54:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:39.182907 | orchestrator | 2025-07-12 13:54:39 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:54:39.184652 | orchestrator | 2025-07-12 13:54:39 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:54:39.186724 | orchestrator | 2025-07-12 13:54:39 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:54:39.187131 | orchestrator | 2025-07-12 13:54:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:42.240394 | orchestrator | 2025-07-12 13:54:42 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:54:42.243764 | orchestrator | 2025-07-12 13:54:42 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in 
state STARTED 2025-07-12 13:54:42.246215 | orchestrator | 2025-07-12 13:54:42 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:54:42.246663 | orchestrator | 2025-07-12 13:54:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:45.295585 | orchestrator | 2025-07-12 13:54:45 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:54:45.296951 | orchestrator | 2025-07-12 13:54:45 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:54:45.298519 | orchestrator | 2025-07-12 13:54:45 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:54:45.298778 | orchestrator | 2025-07-12 13:54:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:48.355653 | orchestrator | 2025-07-12 13:54:48 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:54:48.356096 | orchestrator | 2025-07-12 13:54:48 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:54:48.357816 | orchestrator | 2025-07-12 13:54:48 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:54:48.357837 | orchestrator | 2025-07-12 13:54:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:51.414644 | orchestrator | 2025-07-12 13:54:51 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:54:51.417405 | orchestrator | 2025-07-12 13:54:51 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:54:51.420094 | orchestrator | 2025-07-12 13:54:51 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:54:51.420139 | orchestrator | 2025-07-12 13:54:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:54.470414 | orchestrator | 2025-07-12 13:54:54 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:54:54.471741 | orchestrator 
| 2025-07-12 13:54:54 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:54:54.473578 | orchestrator | 2025-07-12 13:54:54 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:54:54.473736 | orchestrator | 2025-07-12 13:54:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:54:57.527731 | orchestrator | 2025-07-12 13:54:57 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:54:57.529847 | orchestrator | 2025-07-12 13:54:57 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:54:57.531304 | orchestrator | 2025-07-12 13:54:57 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:54:57.531337 | orchestrator | 2025-07-12 13:54:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:00.582603 | orchestrator | 2025-07-12 13:55:00 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:55:00.584062 | orchestrator | 2025-07-12 13:55:00 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:55:00.584091 | orchestrator | 2025-07-12 13:55:00 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:55:00.584103 | orchestrator | 2025-07-12 13:55:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:03.646408 | orchestrator | 2025-07-12 13:55:03 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:55:03.648391 | orchestrator | 2025-07-12 13:55:03 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:55:03.651619 | orchestrator | 2025-07-12 13:55:03 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:55:03.651650 | orchestrator | 2025-07-12 13:55:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:06.699466 | orchestrator | 2025-07-12 13:55:06 | INFO  | Task 
db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:55:06.700506 | orchestrator | 2025-07-12 13:55:06 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:55:06.702207 | orchestrator | 2025-07-12 13:55:06 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:55:06.702558 | orchestrator | 2025-07-12 13:55:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:09.742697 | orchestrator | 2025-07-12 13:55:09 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state STARTED 2025-07-12 13:55:09.743901 | orchestrator | 2025-07-12 13:55:09 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:55:09.745518 | orchestrator | 2025-07-12 13:55:09 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:55:09.745557 | orchestrator | 2025-07-12 13:55:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:12.781570 | orchestrator | 2025-07-12 13:55:12 | INFO  | Task db5de3ec-1675-415c-9362-f5165b0d6a24 is in state SUCCESS 2025-07-12 13:55:12.783637 | orchestrator | 2025-07-12 13:55:12.783746 | orchestrator | 2025-07-12 13:55:12.783761 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:55:12.783775 | orchestrator | 2025-07-12 13:55:12.783786 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:55:12.783798 | orchestrator | Saturday 12 July 2025 13:52:08 +0000 (0:00:00.259) 0:00:00.259 ********* 2025-07-12 13:55:12.783809 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:12.783822 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:12.783833 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:12.783844 | orchestrator | 2025-07-12 13:55:12.783855 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:55:12.783867 | 
orchestrator | Saturday 12 July 2025 13:52:09 +0000 (0:00:00.295) 0:00:00.554 ********* 2025-07-12 13:55:12.783879 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-07-12 13:55:12.783891 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-07-12 13:55:12.783901 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-07-12 13:55:12.783912 | orchestrator | 2025-07-12 13:55:12.783924 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-07-12 13:55:12.783935 | orchestrator | 2025-07-12 13:55:12.783946 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 13:55:12.783956 | orchestrator | Saturday 12 July 2025 13:52:09 +0000 (0:00:00.433) 0:00:00.988 ********* 2025-07-12 13:55:12.783968 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:55:12.783979 | orchestrator | 2025-07-12 13:55:12.783990 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-07-12 13:55:12.784002 | orchestrator | Saturday 12 July 2025 13:52:09 +0000 (0:00:00.493) 0:00:01.481 ********* 2025-07-12 13:55:12.784021 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 13:55:12.784040 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 13:55:12.784051 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 13:55:12.784062 | orchestrator | 2025-07-12 13:55:12.784073 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-07-12 13:55:12.784084 | orchestrator | Saturday 12 July 2025 13:52:10 +0000 (0:00:00.670) 0:00:02.151 ********* 2025-07-12 13:55:12.784099 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.784148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.784194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.784210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.784225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.784249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.784263 | orchestrator | 2025-07-12 13:55:12.784275 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2025-07-12 13:55:12.784288 | orchestrator | Saturday 12 July 2025 13:52:12 +0000 (0:00:01.713) 0:00:03.864 ********* 2025-07-12 13:55:12.784301 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:55:12.784313 | orchestrator | 2025-07-12 13:55:12.784330 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-07-12 13:55:12.784342 | orchestrator | Saturday 12 July 2025 13:52:12 +0000 (0:00:00.523) 0:00:04.388 ********* 2025-07-12 13:55:12.784367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.784381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.784395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.784440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.784468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.784484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.784497 | orchestrator | 2025-07-12 13:55:12.784509 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-07-12 13:55:12.784522 | orchestrator | Saturday 12 July 2025 13:52:15 +0000 (0:00:02.646) 0:00:07.034 ********* 2025-07-12 13:55:12.784534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:55:12.784556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:55:12.784569 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:12.784588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:55:12.784610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:55:12.784622 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:12.784634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:55:12.784653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:55:12.784665 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:12.784677 | orchestrator | 2025-07-12 13:55:12.784688 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-07-12 13:55:12.784699 | orchestrator | Saturday 12 July 2025 13:52:16 +0000 (0:00:01.499) 0:00:08.534 ********* 2025-07-12 13:55:12.784716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:55:12.784736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:55:12.784748 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:12.784760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:55:12.784779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:55:12.784791 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:12.784802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 13:55:12.784824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 13:55:12.784836 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:12.784847 | orchestrator | 2025-07-12 13:55:12.784858 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-07-12 13:55:12.784869 | orchestrator | Saturday 12 July 2025 13:52:18 +0000 (0:00:01.040) 0:00:09.575 ********* 2025-07-12 13:55:12.784885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.784996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.785018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.785046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.785059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.785080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.785092 | orchestrator | 2025-07-12 13:55:12.785103 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-07-12 13:55:12.785114 | orchestrator | Saturday 12 July 2025 13:52:20 +0000 (0:00:02.479) 0:00:12.054 ********* 2025-07-12 13:55:12.785125 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:12.785136 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:12.785147 | 
orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:12.785158 | orchestrator | 2025-07-12 13:55:12.785169 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-07-12 13:55:12.785180 | orchestrator | Saturday 12 July 2025 13:52:24 +0000 (0:00:03.781) 0:00:15.836 ********* 2025-07-12 13:55:12.785190 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:12.785201 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:12.785212 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:12.785223 | orchestrator | 2025-07-12 13:55:12.785234 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-07-12 13:55:12.785245 | orchestrator | Saturday 12 July 2025 13:52:26 +0000 (0:00:02.009) 0:00:17.845 ********* 2025-07-12 13:55:12.785261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.785281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.785299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 13:55:12.785311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.785329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.785350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 13:55:12.785368 | orchestrator | 2025-07-12 13:55:12.785380 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 13:55:12.785391 | orchestrator | Saturday 12 July 2025 13:52:28 +0000 (0:00:02.596) 0:00:20.441 ********* 2025-07-12 13:55:12.785402 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:12.785429 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:12.785441 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:12.785451 | orchestrator | 2025-07-12 13:55:12.785462 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-12 13:55:12.785473 | orchestrator | Saturday 12 July 2025 13:52:29 +0000 (0:00:00.373) 0:00:20.815 ********* 2025-07-12 13:55:12.785484 | orchestrator | 2025-07-12 13:55:12.785494 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-12 13:55:12.785505 | orchestrator | Saturday 12 July 2025 13:52:29 +0000 (0:00:00.066) 0:00:20.881 ********* 2025-07-12 13:55:12.785516 | orchestrator | 2025-07-12 13:55:12.785527 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-12 13:55:12.785537 | 
orchestrator | Saturday 12 July 2025 13:52:29 +0000 (0:00:00.065) 0:00:20.947 ********* 2025-07-12 13:55:12.785548 | orchestrator | 2025-07-12 13:55:12.785558 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-07-12 13:55:12.785569 | orchestrator | Saturday 12 July 2025 13:52:29 +0000 (0:00:00.259) 0:00:21.206 ********* 2025-07-12 13:55:12.785580 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:12.785590 | orchestrator | 2025-07-12 13:55:12.785601 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-07-12 13:55:12.785612 | orchestrator | Saturday 12 July 2025 13:52:29 +0000 (0:00:00.242) 0:00:21.448 ********* 2025-07-12 13:55:12.785623 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:12.785633 | orchestrator | 2025-07-12 13:55:12.785644 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-07-12 13:55:12.785655 | orchestrator | Saturday 12 July 2025 13:52:30 +0000 (0:00:00.211) 0:00:21.660 ********* 2025-07-12 13:55:12.785666 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:12.785677 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:12.785687 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:12.785698 | orchestrator | 2025-07-12 13:55:12.785709 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-07-12 13:55:12.785720 | orchestrator | Saturday 12 July 2025 13:53:38 +0000 (0:01:08.306) 0:01:29.966 ********* 2025-07-12 13:55:12.785731 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:12.785742 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:12.785753 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:12.785772 | orchestrator | 2025-07-12 13:55:12.785791 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 13:55:12.785809 | 
orchestrator | Saturday 12 July 2025 13:55:00 +0000 (0:01:22.170) 0:02:52.136 ********* 2025-07-12 13:55:12.785826 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:55:12.785844 | orchestrator | 2025-07-12 13:55:12.785861 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-07-12 13:55:12.785879 | orchestrator | Saturday 12 July 2025 13:55:01 +0000 (0:00:00.719) 0:02:52.856 ********* 2025-07-12 13:55:12.785898 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:12.785917 | orchestrator | 2025-07-12 13:55:12.785955 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-07-12 13:55:12.785974 | orchestrator | Saturday 12 July 2025 13:55:03 +0000 (0:00:02.419) 0:02:55.275 ********* 2025-07-12 13:55:12.785989 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:12.786000 | orchestrator | 2025-07-12 13:55:12.786011 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-07-12 13:55:12.786093 | orchestrator | Saturday 12 July 2025 13:55:05 +0000 (0:00:02.148) 0:02:57.423 ********* 2025-07-12 13:55:12.786106 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:12.786116 | orchestrator | 2025-07-12 13:55:12.786127 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-07-12 13:55:12.786138 | orchestrator | Saturday 12 July 2025 13:55:08 +0000 (0:00:02.677) 0:03:00.101 ********* 2025-07-12 13:55:12.786149 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:12.786159 | orchestrator | 2025-07-12 13:55:12.786177 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:55:12.786189 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 13:55:12.786202 | orchestrator 
| testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:55:12.786213 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 13:55:12.786224 | orchestrator | 2025-07-12 13:55:12.786234 | orchestrator | 2025-07-12 13:55:12.786245 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:55:12.786266 | orchestrator | Saturday 12 July 2025 13:55:11 +0000 (0:00:02.444) 0:03:02.545 ********* 2025-07-12 13:55:12.786277 | orchestrator | =============================================================================== 2025-07-12 13:55:12.786288 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.17s 2025-07-12 13:55:12.786299 | orchestrator | opensearch : Restart opensearch container ------------------------------ 68.31s 2025-07-12 13:55:12.786310 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.78s 2025-07-12 13:55:12.786320 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.68s 2025-07-12 13:55:12.786331 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.65s 2025-07-12 13:55:12.786341 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.60s 2025-07-12 13:55:12.786352 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.48s 2025-07-12 13:55:12.786363 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.44s 2025-07-12 13:55:12.786375 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.42s 2025-07-12 13:55:12.786395 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.15s 2025-07-12 13:55:12.786433 | orchestrator | opensearch : Copying over 
opensearch-dashboards config file ------------- 2.01s 2025-07-12 13:55:12.786452 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.71s 2025-07-12 13:55:12.786470 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.50s 2025-07-12 13:55:12.786487 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.04s 2025-07-12 13:55:12.786505 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.72s 2025-07-12 13:55:12.786522 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.67s 2025-07-12 13:55:12.786533 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-07-12 13:55:12.786544 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2025-07-12 13:55:12.786555 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-07-12 13:55:12.786576 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.39s 2025-07-12 13:55:12.786587 | orchestrator | 2025-07-12 13:55:12 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:55:12.786598 | orchestrator | 2025-07-12 13:55:12 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:55:12.786609 | orchestrator | 2025-07-12 13:55:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:15.836210 | orchestrator | 2025-07-12 13:55:15 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:55:15.837186 | orchestrator | 2025-07-12 13:55:15 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:55:15.837553 | orchestrator | 2025-07-12 13:55:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:18.881843 | orchestrator | 
2025-07-12 13:55:18 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state STARTED 2025-07-12 13:55:18.885143 | orchestrator | 2025-07-12 13:55:18 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:55:18.885506 | orchestrator | 2025-07-12 13:55:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:55:21.937353 | orchestrator | 2025-07-12 13:55:21 | INFO  | Task d53dc7ca-3e9b-4768-82a4-7d10da54df06 is in state SUCCESS 2025-07-12 13:55:21.938382 | orchestrator | 2025-07-12 13:55:21.938515 | orchestrator | 2025-07-12 13:55:21.938530 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-07-12 13:55:21.938714 | orchestrator | 2025-07-12 13:55:21.938730 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-12 13:55:21.938742 | orchestrator | Saturday 12 July 2025 13:52:08 +0000 (0:00:00.103) 0:00:00.103 ********* 2025-07-12 13:55:21.938753 | orchestrator | ok: [localhost] => { 2025-07-12 13:55:21.938766 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-07-12 13:55:21.938777 | orchestrator | } 2025-07-12 13:55:21.938789 | orchestrator | 2025-07-12 13:55:21.938800 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-07-12 13:55:21.938811 | orchestrator | Saturday 12 July 2025 13:52:08 +0000 (0:00:00.059) 0:00:00.163 ********* 2025-07-12 13:55:21.938843 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-07-12 13:55:21.938856 | orchestrator | ...ignoring 2025-07-12 13:55:21.938867 | orchestrator | 2025-07-12 13:55:21.938878 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-07-12 13:55:21.938889 | orchestrator | Saturday 12 July 2025 13:52:11 +0000 (0:00:02.805) 0:00:02.968 ********* 2025-07-12 13:55:21.938900 | orchestrator | skipping: [localhost] 2025-07-12 13:55:21.938912 | orchestrator | 2025-07-12 13:55:21.938923 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-07-12 13:55:21.938934 | orchestrator | Saturday 12 July 2025 13:52:11 +0000 (0:00:00.054) 0:00:03.023 ********* 2025-07-12 13:55:21.938945 | orchestrator | ok: [localhost] 2025-07-12 13:55:21.938955 | orchestrator | 2025-07-12 13:55:21.938966 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:55:21.938977 | orchestrator | 2025-07-12 13:55:21.938988 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:55:21.938998 | orchestrator | Saturday 12 July 2025 13:52:11 +0000 (0:00:00.162) 0:00:03.185 ********* 2025-07-12 13:55:21.939009 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.939020 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:21.939031 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:21.939042 | orchestrator | 2025-07-12 13:55:21.939052 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:55:21.939063 | orchestrator | Saturday 12 July 2025 13:52:12 +0000 (0:00:00.308) 0:00:03.494 ********* 2025-07-12 13:55:21.939105 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-12 13:55:21.939117 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-07-12 13:55:21.939128 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-12 13:55:21.939139 | orchestrator | 2025-07-12 13:55:21.939149 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-12 13:55:21.939160 | orchestrator | 2025-07-12 13:55:21.939171 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-12 13:55:21.939182 | orchestrator | Saturday 12 July 2025 13:52:12 +0000 (0:00:00.640) 0:00:04.135 ********* 2025-07-12 13:55:21.939193 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 13:55:21.939203 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-12 13:55:21.939214 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-12 13:55:21.939225 | orchestrator | 2025-07-12 13:55:21.939236 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 13:55:21.939246 | orchestrator | Saturday 12 July 2025 13:52:13 +0000 (0:00:00.475) 0:00:04.610 ********* 2025-07-12 13:55:21.939257 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:55:21.939268 | orchestrator | 2025-07-12 13:55:21.939279 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-07-12 13:55:21.939290 | orchestrator | Saturday 12 July 2025 13:52:13 +0000 (0:00:00.686) 0:00:05.296 ********* 2025-07-12 13:55:21.939322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:21.939347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:21.939377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:21.939399 | orchestrator | 2025-07-12 13:55:21.939457 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-07-12 13:55:21.939476 | orchestrator | Saturday 12 July 2025 13:52:17 +0000 (0:00:03.544) 0:00:08.841 ********* 2025-07-12 13:55:21.939496 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.939517 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.939648 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.939664 | orchestrator | 2025-07-12 13:55:21.939676 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-07-12 13:55:21.939689 | orchestrator | Saturday 12 July 2025 13:52:18 +0000 (0:00:00.962) 0:00:09.803 ********* 2025-07-12 13:55:21.939701 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
13:55:21.939712 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.939723 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.939734 | orchestrator | 2025-07-12 13:55:21.939745 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-07-12 13:55:21.939763 | orchestrator | Saturday 12 July 2025 13:52:19 +0000 (0:00:01.543) 0:00:11.346 ********* 2025-07-12 13:55:21.939786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:21.939810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:21.939829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 
13:55:21.939849 | orchestrator | 2025-07-12 13:55:21.939860 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-07-12 13:55:21.939871 | orchestrator | Saturday 12 July 2025 13:52:24 +0000 (0:00:04.415) 0:00:15.761 ********* 2025-07-12 13:55:21.939882 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.939893 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.939904 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.939914 | orchestrator | 2025-07-12 13:55:21.939925 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-07-12 13:55:21.939936 | orchestrator | Saturday 12 July 2025 13:52:25 +0000 (0:00:01.289) 0:00:17.051 ********* 2025-07-12 13:55:21.939947 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.939958 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:21.939969 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:21.939980 | orchestrator | 2025-07-12 13:55:21.939990 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 13:55:21.940001 | orchestrator | Saturday 12 July 2025 13:52:30 +0000 (0:00:04.827) 0:00:21.879 ********* 2025-07-12 13:55:21.940013 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:55:21.940024 | orchestrator | 2025-07-12 13:55:21.940034 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-12 13:55:21.940045 | orchestrator | Saturday 12 July 2025 13:52:30 +0000 (0:00:00.554) 0:00:22.434 ********* 2025-07-12 13:55:21.940071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:21.940091 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:21.940103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:21.940116 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.940135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:21.940154 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.940166 | orchestrator | 2025-07-12 13:55:21.940177 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-12 13:55:21.940188 | orchestrator | Saturday 12 July 2025 13:52:34 +0000 (0:00:03.533) 0:00:25.967 ********* 2025-07-12 13:55:21.940204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:21.940217 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:21.940235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:21.940254 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.940271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:21.940284 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.940295 | orchestrator | 2025-07-12 13:55:21.940306 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-12 13:55:21.940317 | orchestrator | Saturday 12 July 2025 13:52:37 +0000 (0:00:03.137) 0:00:29.105 ********* 2025-07-12 13:55:21.940328 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:21.940355 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:21.940381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:21.940394 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.940426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 13:55:21.940447 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.940459 | orchestrator | 2025-07-12 13:55:21.940470 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-07-12 13:55:21.940481 | orchestrator | Saturday 12 July 2025 13:52:40 +0000 
(0:00:03.027) 0:00:32.133 ********* 2025-07-12 13:55:21.940507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:21.940520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:21.940554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 13:55:21.940567 | orchestrator | 2025-07-12 13:55:21.940579 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-07-12 13:55:21.940589 | orchestrator | Saturday 12 July 2025 13:52:44 +0000 (0:00:03.339) 0:00:35.473 ********* 2025-07-12 13:55:21.940600 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.940611 | orchestrator | 
changed: [testbed-node-1] 2025-07-12 13:55:21.940622 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:21.940633 | orchestrator | 2025-07-12 13:55:21.940644 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-07-12 13:55:21.940654 | orchestrator | Saturday 12 July 2025 13:52:45 +0000 (0:00:01.574) 0:00:37.047 ********* 2025-07-12 13:55:21.940759 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.940772 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:21.940782 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:21.940793 | orchestrator | 2025-07-12 13:55:21.940804 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-07-12 13:55:21.940815 | orchestrator | Saturday 12 July 2025 13:52:46 +0000 (0:00:00.432) 0:00:37.480 ********* 2025-07-12 13:55:21.940826 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.940837 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:21.940848 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:21.940858 | orchestrator | 2025-07-12 13:55:21.940870 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-07-12 13:55:21.940881 | orchestrator | Saturday 12 July 2025 13:52:46 +0000 (0:00:00.543) 0:00:38.023 ********* 2025-07-12 13:55:21.940893 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-07-12 13:55:21.940904 | orchestrator | ...ignoring 2025-07-12 13:55:21.940915 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-07-12 13:55:21.940926 | orchestrator | ...ignoring 2025-07-12 13:55:21.940937 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-07-12 13:55:21.940956 | orchestrator | ...ignoring 2025-07-12 13:55:21.940967 | orchestrator | 2025-07-12 13:55:21.940978 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-07-12 13:55:21.940989 | orchestrator | Saturday 12 July 2025 13:52:57 +0000 (0:00:11.009) 0:00:49.032 ********* 2025-07-12 13:55:21.941000 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.941011 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:21.941022 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:21.941033 | orchestrator | 2025-07-12 13:55:21.941044 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-07-12 13:55:21.941055 | orchestrator | Saturday 12 July 2025 13:52:58 +0000 (0:00:00.731) 0:00:49.763 ********* 2025-07-12 13:55:21.941065 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:21.941076 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.941088 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.941098 | orchestrator | 2025-07-12 13:55:21.941109 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-07-12 13:55:21.941120 | orchestrator | Saturday 12 July 2025 13:52:58 +0000 (0:00:00.468) 0:00:50.232 ********* 2025-07-12 13:55:21.941131 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:21.941142 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.941152 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.941163 | orchestrator | 2025-07-12 13:55:21.941174 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-07-12 13:55:21.941185 | orchestrator | Saturday 12 July 2025 13:52:59 +0000 (0:00:00.440) 0:00:50.673 ********* 2025-07-12 13:55:21.941196 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 13:55:21.941207 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.941218 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.941228 | orchestrator | 2025-07-12 13:55:21.941239 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-07-12 13:55:21.941250 | orchestrator | Saturday 12 July 2025 13:52:59 +0000 (0:00:00.415) 0:00:51.088 ********* 2025-07-12 13:55:21.941261 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.941272 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:21.941283 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:21.941294 | orchestrator | 2025-07-12 13:55:21.941304 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-07-12 13:55:21.941315 | orchestrator | Saturday 12 July 2025 13:53:00 +0000 (0:00:00.666) 0:00:51.755 ********* 2025-07-12 13:55:21.941334 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:21.941345 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.941356 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.941367 | orchestrator | 2025-07-12 13:55:21.941378 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 13:55:21.941390 | orchestrator | Saturday 12 July 2025 13:53:00 +0000 (0:00:00.477) 0:00:52.232 ********* 2025-07-12 13:55:21.941402 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.941432 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.941444 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-07-12 13:55:21.941456 | orchestrator | 2025-07-12 13:55:21.941469 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-07-12 13:55:21.941481 | orchestrator | Saturday 12 July 2025 13:53:01 +0000 (0:00:00.370) 0:00:52.603 ********* 2025-07-12 
13:55:21.941499 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.941511 | orchestrator | 2025-07-12 13:55:21.941523 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-07-12 13:55:21.941535 | orchestrator | Saturday 12 July 2025 13:53:11 +0000 (0:00:09.979) 0:01:02.582 ********* 2025-07-12 13:55:21.941547 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.941559 | orchestrator | 2025-07-12 13:55:21.941571 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 13:55:21.941590 | orchestrator | Saturday 12 July 2025 13:53:11 +0000 (0:00:00.138) 0:01:02.721 ********* 2025-07-12 13:55:21.941602 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:21.941614 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.941626 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.941638 | orchestrator | 2025-07-12 13:55:21.941650 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-07-12 13:55:21.941662 | orchestrator | Saturday 12 July 2025 13:53:12 +0000 (0:00:01.049) 0:01:03.771 ********* 2025-07-12 13:55:21.941674 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.941686 | orchestrator | 2025-07-12 13:55:21.941699 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-07-12 13:55:21.941711 | orchestrator | Saturday 12 July 2025 13:53:20 +0000 (0:00:07.776) 0:01:11.548 ********* 2025-07-12 13:55:21.941723 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.941736 | orchestrator | 2025-07-12 13:55:21.941747 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-07-12 13:55:21.941758 | orchestrator | Saturday 12 July 2025 13:53:21 +0000 (0:00:01.601) 0:01:13.149 ********* 2025-07-12 13:55:21.941769 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.941780 | 
orchestrator | 2025-07-12 13:55:21.941791 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-07-12 13:55:21.941802 | orchestrator | Saturday 12 July 2025 13:53:24 +0000 (0:00:02.563) 0:01:15.713 ********* 2025-07-12 13:55:21.941813 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.941824 | orchestrator | 2025-07-12 13:55:21.941835 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-07-12 13:55:21.941846 | orchestrator | Saturday 12 July 2025 13:53:24 +0000 (0:00:00.123) 0:01:15.836 ********* 2025-07-12 13:55:21.941857 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:21.941868 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.941879 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.941890 | orchestrator | 2025-07-12 13:55:21.941900 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-07-12 13:55:21.941911 | orchestrator | Saturday 12 July 2025 13:53:24 +0000 (0:00:00.527) 0:01:16.364 ********* 2025-07-12 13:55:21.941922 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:21.941933 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-12 13:55:21.941944 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:21.941955 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:21.941966 | orchestrator | 2025-07-12 13:55:21.941977 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-12 13:55:21.941988 | orchestrator | skipping: no hosts matched 2025-07-12 13:55:21.941999 | orchestrator | 2025-07-12 13:55:21.942010 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 13:55:21.942055 | orchestrator | 2025-07-12 13:55:21.942066 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-07-12 13:55:21.942077 | orchestrator | Saturday 12 July 2025 13:53:25 +0000 (0:00:00.331) 0:01:16.696 ********* 2025-07-12 13:55:21.942088 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:55:21.942098 | orchestrator | 2025-07-12 13:55:21.942109 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 13:55:21.942120 | orchestrator | Saturday 12 July 2025 13:53:48 +0000 (0:00:23.318) 0:01:40.014 ********* 2025-07-12 13:55:21.942131 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:21.942142 | orchestrator | 2025-07-12 13:55:21.942152 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 13:55:21.942163 | orchestrator | Saturday 12 July 2025 13:54:04 +0000 (0:00:15.645) 0:01:55.660 ********* 2025-07-12 13:55:21.942174 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:21.942185 | orchestrator | 2025-07-12 13:55:21.942196 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 13:55:21.942206 | orchestrator | 2025-07-12 13:55:21.942217 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 13:55:21.942235 | orchestrator | Saturday 12 July 2025 13:54:06 +0000 (0:00:02.532) 0:01:58.192 ********* 2025-07-12 13:55:21.942246 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:55:21.942257 | orchestrator | 2025-07-12 13:55:21.942268 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 13:55:21.942279 | orchestrator | Saturday 12 July 2025 13:54:24 +0000 (0:00:17.759) 0:02:15.952 ********* 2025-07-12 13:55:21.942289 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:21.942300 | orchestrator | 2025-07-12 13:55:21.942311 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 13:55:21.942322 
| orchestrator | Saturday 12 July 2025 13:54:45 +0000 (0:00:20.603) 0:02:36.555 ********* 2025-07-12 13:55:21.942333 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:21.942344 | orchestrator | 2025-07-12 13:55:21.942355 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-12 13:55:21.942366 | orchestrator | 2025-07-12 13:55:21.942383 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 13:55:21.942395 | orchestrator | Saturday 12 July 2025 13:54:47 +0000 (0:00:02.773) 0:02:39.329 ********* 2025-07-12 13:55:21.942432 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.942444 | orchestrator | 2025-07-12 13:55:21.942455 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 13:55:21.942466 | orchestrator | Saturday 12 July 2025 13:55:04 +0000 (0:00:16.722) 0:02:56.051 ********* 2025-07-12 13:55:21.942477 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.942488 | orchestrator | 2025-07-12 13:55:21.942499 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 13:55:21.942510 | orchestrator | Saturday 12 July 2025 13:55:05 +0000 (0:00:00.608) 0:02:56.660 ********* 2025-07-12 13:55:21.942521 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.942532 | orchestrator | 2025-07-12 13:55:21.942549 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-12 13:55:21.942560 | orchestrator | 2025-07-12 13:55:21.942571 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-12 13:55:21.942582 | orchestrator | Saturday 12 July 2025 13:55:07 +0000 (0:00:02.445) 0:02:59.105 ********* 2025-07-12 13:55:21.942593 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:55:21.942604 | orchestrator | 
2025-07-12 13:55:21.942614 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-07-12 13:55:21.942625 | orchestrator | Saturday 12 July 2025 13:55:08 +0000 (0:00:00.563) 0:02:59.669 ********* 2025-07-12 13:55:21.942636 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.942647 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.942658 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.942669 | orchestrator | 2025-07-12 13:55:21.942680 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-07-12 13:55:21.942691 | orchestrator | Saturday 12 July 2025 13:55:10 +0000 (0:00:02.363) 0:03:02.032 ********* 2025-07-12 13:55:21.942702 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.942713 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.942724 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.942735 | orchestrator | 2025-07-12 13:55:21.942746 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-07-12 13:55:21.942757 | orchestrator | Saturday 12 July 2025 13:55:12 +0000 (0:00:01.986) 0:03:04.019 ********* 2025-07-12 13:55:21.942768 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.942779 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.942790 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.942800 | orchestrator | 2025-07-12 13:55:21.942811 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-07-12 13:55:21.942822 | orchestrator | Saturday 12 July 2025 13:55:14 +0000 (0:00:02.091) 0:03:06.110 ********* 2025-07-12 13:55:21.942833 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.942851 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.942862 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:55:21.942873 | orchestrator | 
2025-07-12 13:55:21.942884 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-07-12 13:55:21.942895 | orchestrator | Saturday 12 July 2025 13:55:16 +0000 (0:00:02.033) 0:03:08.144 ********* 2025-07-12 13:55:21.942906 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:55:21.942917 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:55:21.942927 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:55:21.942938 | orchestrator | 2025-07-12 13:55:21.942949 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-12 13:55:21.942960 | orchestrator | Saturday 12 July 2025 13:55:19 +0000 (0:00:03.040) 0:03:11.184 ********* 2025-07-12 13:55:21.942971 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:55:21.942982 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:55:21.942993 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:55:21.943003 | orchestrator | 2025-07-12 13:55:21.943014 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:55:21.943025 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-12 13:55:21.943037 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-07-12 13:55:21.943050 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-12 13:55:21.943061 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-12 13:55:21.943072 | orchestrator | 2025-07-12 13:55:21.943084 | orchestrator | 2025-07-12 13:55:21.943094 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:55:21.943105 | orchestrator | Saturday 12 July 2025 13:55:19 +0000 (0:00:00.238) 0:03:11.423 ********* 2025-07-12 13:55:21.943116 | 
orchestrator | =============================================================================== 2025-07-12 13:55:21.943127 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.08s 2025-07-12 13:55:21.943138 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.25s 2025-07-12 13:55:21.943149 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.72s 2025-07-12 13:55:21.943160 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.01s 2025-07-12 13:55:21.943171 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.98s 2025-07-12 13:55:21.943182 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.78s 2025-07-12 13:55:21.943198 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.31s 2025-07-12 13:55:21.943210 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.83s 2025-07-12 13:55:21.943221 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.42s 2025-07-12 13:55:21.943232 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.54s 2025-07-12 13:55:21.943243 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.53s 2025-07-12 13:55:21.943254 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.34s 2025-07-12 13:55:21.943264 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.14s 2025-07-12 13:55:21.943275 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.04s 2025-07-12 13:55:21.943291 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.03s 2025-07-12 13:55:21.943302 | 
orchestrator | Check MariaDB service --------------------------------------------------- 2.81s 2025-07-12 13:55:21.943320 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.56s 2025-07-12 13:55:21.943331 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.45s 2025-07-12 13:55:21.943342 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.36s 2025-07-12 13:55:21.943353 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.09s 2025-07-12 13:55:21.943364 | orchestrator | 2025-07-12 13:55:21 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state STARTED 2025-07-12 13:55:21.943375 | orchestrator | 2025-07-12 13:55:21 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:55:21.949562 | orchestrator | 2025-07-12 13:55:21 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED 2025-07-12 13:55:21.949596 | orchestrator | 2025-07-12 13:55:21 | INFO  | Wait 1 second(s) until the next check
INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED
2025-07-12 13:56:38.239398 | orchestrator | 2025-07-12 13:56:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:56:41.283128 | orchestrator | 2025-07-12 13:56:41 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state STARTED
2025-07-12 13:56:41.285010 | orchestrator | 2025-07-12 13:56:41 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED
2025-07-12 13:56:41.286448 | orchestrator | 2025-07-12 13:56:41 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state STARTED
2025-07-12 13:56:41.286478 | orchestrator | 2025-07-12 13:56:41 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:56:44.333740 | orchestrator | 2025-07-12 13:56:44 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state STARTED
2025-07-12 13:56:44.334792 | orchestrator | 2025-07-12 13:56:44 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED
2025-07-12 13:56:44.337722 | orchestrator | 2025-07-12 13:56:44 | INFO  | Task 45ea9250-404f-4660-a40c-541d3f19d55a is in state SUCCESS
2025-07-12 13:56:44.341210 | orchestrator |
2025-07-12 13:56:44.341350 | orchestrator |
2025-07-12 13:56:44.341920 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-07-12 13:56:44.341942 | orchestrator |
2025-07-12 13:56:44.341953 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-07-12 13:56:44.341965 | orchestrator | Saturday 12 July 2025 13:54:34 +0000 (0:00:00.588) 0:00:00.588 *********
2025-07-12 13:56:44.341977 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:56:44.341989 | orchestrator |
2025-07-12 13:56:44.342000 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-07-12 13:56:44.342012 | orchestrator | Saturday 12 July 2025 13:54:35 +0000 (0:00:00.656) 0:00:01.244 *********
2025-07-12 13:56:44.342082 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.342095 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.342106 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.342117 | orchestrator |
2025-07-12 13:56:44.342128 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-07-12 13:56:44.342140 | orchestrator | Saturday 12 July 2025 13:54:35 +0000 (0:00:00.658) 0:00:01.903 *********
2025-07-12 13:56:44.342150 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.342162 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.342172 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.342183 | orchestrator |
2025-07-12 13:56:44.342194 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-07-12 13:56:44.342205 | orchestrator | Saturday 12 July 2025 13:54:36 +0000 (0:00:00.290) 0:00:02.193 *********
2025-07-12 13:56:44.342216 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.342227 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.342238 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.342249 | orchestrator |
2025-07-12 13:56:44.342260 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-07-12 13:56:44.342271 | orchestrator | Saturday 12 July 2025 13:54:37 +0000 (0:00:00.808) 0:00:03.001 *********
2025-07-12 13:56:44.342283 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.342294 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.342305 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.342315 | orchestrator |
2025-07-12 13:56:44.342326 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-12 13:56:44.342336 | orchestrator | Saturday 12 July 2025 13:54:37 +0000 (0:00:00.322) 0:00:03.323 *********
2025-07-12 13:56:44.342347 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.342401 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.342413 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.342423 | orchestrator |
2025-07-12 13:56:44.342434 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-12 13:56:44.342445 | orchestrator | Saturday 12 July 2025 13:54:37 +0000 (0:00:00.330) 0:00:03.654 *********
2025-07-12 13:56:44.342456 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.342467 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.342477 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.342488 | orchestrator |
2025-07-12 13:56:44.342499 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-12 13:56:44.342510 | orchestrator | Saturday 12 July 2025 13:54:38 +0000 (0:00:00.318) 0:00:03.972 *********
2025-07-12 13:56:44.342521 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.342532 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.342545 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.342557 | orchestrator |
2025-07-12 13:56:44.342569 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-12 13:56:44.342581 | orchestrator | Saturday 12 July 2025 13:54:38 +0000 (0:00:00.499) 0:00:04.472 *********
2025-07-12 13:56:44.342593 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.342605 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.342617 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.342629 | orchestrator |
2025-07-12 13:56:44.342642 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-12 13:56:44.342668 | orchestrator | Saturday 12 July 2025 13:54:38 +0000 (0:00:00.311) 0:00:04.783 *********
2025-07-12 13:56:44.342680 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 13:56:44.342693 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:56:44.342706 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:56:44.342718 | orchestrator |
2025-07-12 13:56:44.342730 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-12 13:56:44.342742 | orchestrator | Saturday 12 July 2025 13:54:39 +0000 (0:00:00.653) 0:00:05.437 *********
2025-07-12 13:56:44.342753 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.342766 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.342777 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.342789 | orchestrator |
2025-07-12 13:56:44.342801 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-12 13:56:44.342828 | orchestrator | Saturday 12 July 2025 13:54:39 +0000 (0:00:00.412) 0:00:05.850 *********
2025-07-12 13:56:44.342840 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 13:56:44.342853 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 13:56:44.342865 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 13:56:44.342877 | orchestrator |
2025-07-12 13:56:44.342889 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-12 13:56:44.342900 | orchestrator | Saturday 12 July 2025 13:54:41 +0000 (0:00:02.113) 0:00:07.963 *********
2025-07-12 13:56:44.342911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 13:56:44.342922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 13:56:44.342933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 13:56:44.342944 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.342954 | orchestrator |
2025-07-12 13:56:44.342965 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-12 13:56:44.343025 | orchestrator | Saturday 12 July 2025 13:54:42 +0000 (0:00:00.413) 0:00:08.377 *********
2025-07-12 13:56:44.343041 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 13:56:44.343056 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 13:56:44.343067 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 13:56:44.343079 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.343090 | orchestrator |
2025-07-12 13:56:44.343101 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-12 13:56:44.343112 | orchestrator | Saturday 12 July 2025 13:54:43 +0000 (0:00:00.775) 0:00:09.152 *********
2025-07-12 13:56:44.343126 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:56:44.343140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:56:44.343159 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 13:56:44.343170 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.343181 | orchestrator |
2025-07-12 13:56:44.343192 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-12 13:56:44.343203 | orchestrator | Saturday 12 July 2025 13:54:43 +0000 (0:00:00.173) 0:00:09.325 *********
2025-07-12 13:56:44.343217 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '34d417765272', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-12 13:54:40.538629', 'end': '2025-07-12 13:54:40.576678', 'delta': '0:00:00.038049', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['34d417765272'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 13:56:44.343238 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0c2dee3c442c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-12 13:54:41.285579', 'end': '2025-07-12 13:54:41.335270', 'delta': '0:00:00.049691', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0c2dee3c442c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 13:56:44.343283 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'dece65df6ad1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-12 13:54:41.825077', 'end': '2025-07-12 13:54:41.863479', 'delta': '0:00:00.038402', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['dece65df6ad1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 13:56:44.343297 | orchestrator |
2025-07-12 13:56:44.343308 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-07-12 13:56:44.343319 | orchestrator | Saturday 12 July 2025 13:54:43 +0000 (0:00:00.371) 0:00:09.697 *********
2025-07-12 13:56:44.343330 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.343341 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.343411 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.343425 | orchestrator |
2025-07-12 13:56:44.343436 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-12 13:56:44.343446 | orchestrator | Saturday 12 July 2025 13:54:44 +0000 (0:00:00.430) 0:00:10.127 *********
2025-07-12 13:56:44.343466 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-07-12 13:56:44.343477 | orchestrator |
2025-07-12 13:56:44.343488 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-12 13:56:44.343499 | orchestrator | Saturday 12 July 2025 13:54:46 +0000 (0:00:01.851) 0:00:11.979 *********
2025-07-12 13:56:44.343510 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.343521 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.343532 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.343542 | orchestrator |
2025-07-12 13:56:44.343553 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-12 13:56:44.343564 | orchestrator | Saturday 12 July 2025 13:54:46 +0000 (0:00:00.288) 0:00:12.267 *********
2025-07-12 13:56:44.343575 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.343586 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.343596 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.343607 | orchestrator |
2025-07-12 13:56:44.343618 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 13:56:44.343628 | orchestrator | Saturday 12 July 2025 13:54:46 +0000 (0:00:00.407) 0:00:12.675 *********
2025-07-12 13:56:44.343639 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.343650 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.343661 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.343672 | orchestrator |
2025-07-12 13:56:44.343683 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-12 13:56:44.343694 | orchestrator | Saturday 12 July 2025 13:54:47 +0000 (0:00:00.474) 0:00:13.149 *********
2025-07-12 13:56:44.343705 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.343715 | orchestrator |
2025-07-12 13:56:44.343726 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-12 13:56:44.343737 | orchestrator | Saturday 12 July 2025 13:54:47 +0000 (0:00:00.154) 0:00:13.304 *********
2025-07-12 13:56:44.343748 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.343758 | orchestrator |
2025-07-12 13:56:44.343769 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 13:56:44.343780 | orchestrator | Saturday 12 July 2025 13:54:47 +0000 (0:00:00.232) 0:00:13.536 *********
2025-07-12 13:56:44.343791 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.343801 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.343812 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.343823 | orchestrator |
2025-07-12 13:56:44.343834 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-12 13:56:44.343845 | orchestrator | Saturday 12 July 2025 13:54:47 +0000 (0:00:00.290) 0:00:13.827 *********
2025-07-12 13:56:44.343855 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.343866 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.343877 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.343887 | orchestrator |
2025-07-12 13:56:44.343898 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-12 13:56:44.343909 | orchestrator | Saturday 12 July 2025 13:54:48 +0000 (0:00:00.324) 0:00:14.151 *********
2025-07-12 13:56:44.343920 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.343931 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.343941 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.343950 | orchestrator |
2025-07-12 13:56:44.343960 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-12 13:56:44.343975 | orchestrator | Saturday 12 July 2025 13:54:48 +0000 (0:00:00.506) 0:00:14.658 *********
2025-07-12 13:56:44.343984 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.343994 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.344004 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.344013 | orchestrator |
2025-07-12 13:56:44.344023 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-07-12 13:56:44.344038 | orchestrator | Saturday 12 July 2025 13:54:49 +0000 (0:00:00.325) 0:00:14.984 *********
2025-07-12 13:56:44.344048 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.344058 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.344067 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.344077 | orchestrator |
2025-07-12 13:56:44.344086 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-07-12 13:56:44.344096 | orchestrator | Saturday 12 July 2025 13:54:49 +0000 (0:00:00.340) 0:00:15.325 *********
2025-07-12 13:56:44.344105 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.344115 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.344125 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.344134 | orchestrator |
2025-07-12 13:56:44.344144 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-07-12 13:56:44.344184 | orchestrator | Saturday 12 July 2025 13:54:49 +0000 (0:00:00.328) 0:00:15.653 *********
2025-07-12 13:56:44.344196 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.344206 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.344215 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.344225 | orchestrator |
2025-07-12 13:56:44.344234 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-07-12 13:56:44.344244 | orchestrator | Saturday 12 July 2025 13:54:50 +0000 (0:00:00.507) 0:00:16.160 *********
2025-07-12 13:56:44.344255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09698b4c--8482--58a0--ad33--d3500ef3a9f7-osd--block--09698b4c--8482--58a0--ad33--d3500ef3a9f7', 'dm-uuid-LVM-4MF8FKekAfibsfbuuKjfJMplsjoYqjph0Xzt3sPd98YeKxR2QYYiQusPioenEqOL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f35471dc--23d0--5222--b540--93882fae0f69-osd--block--f35471dc--23d0--5222--b540--93882fae0f69', 'dm-uuid-LVM-bfTaqVa88Rh4Nequz5jEWqhv8Td4ZmNEk6j5EzVds24XoTOrltYM7dL2Lhbdua3t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part1', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part14', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part15', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part16', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:56:44.344446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f88c8806--82e1--5c41--a829--e62dc4a8fdb6-osd--block--f88c8806--82e1--5c41--a829--e62dc4a8fdb6', 'dm-uuid-LVM-PL6sVvcXnMQc2eiNHfOUI24TaeNmZfwUZwuYEvpVd1ZPqfmSI02R1EW4iawYgKm3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--09698b4c--8482--58a0--ad33--d3500ef3a9f7-osd--block--09698b4c--8482--58a0--ad33--d3500ef3a9f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7JkHHe-bttj-aJwz-4iXT-Ljd7-kKVl-eKVWMP', 'scsi-0QEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830', 'scsi-SQEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:56:44.344498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbedf305--2fae--5605--926c--96a21a5245d1-osd--block--fbedf305--2fae--5605--926c--96a21a5245d1', 'dm-uuid-LVM-RL6JixEV7A5I01cMNuWGdtUMze3uy7fwReT9hUfFvSByD1xD02QmFCPfrfxrr2bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f35471dc--23d0--5222--b540--93882fae0f69-osd--block--f35471dc--23d0--5222--b540--93882fae0f69'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2txnhq-Fhyu-kyj7-iRya-mECk-ZjRq-xPZGdV', 'scsi-0QEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767', 'scsi-SQEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:56:44.344519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac', 'scsi-SQEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:56:44.344547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 13:56:44.344562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 13:56:44.344599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
 2025-07-12 13:56:44.344611 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:44.344621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part1', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part14', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part15', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part16', 
'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:44.344708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f88c8806--82e1--5c41--a829--e62dc4a8fdb6-osd--block--f88c8806--82e1--5c41--a829--e62dc4a8fdb6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C0wuZN-oRBc-0l8h-zfMZ-pRfR-pgPn-zQO3yO', 'scsi-0QEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98', 'scsi-SQEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:44.344719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2177925c--0e94--5467--9f04--b37733dbe47a-osd--block--2177925c--0e94--5467--9f04--b37733dbe47a', 'dm-uuid-LVM-HhJf71qEjqPRC94IO3h96dIc0QoGrWborFvpLuXK7q9owoecVv6ZEWdnKpmoS0BU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 
13:56:44.344729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fbedf305--2fae--5605--926c--96a21a5245d1-osd--block--fbedf305--2fae--5605--926c--96a21a5245d1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NvbKyO-TkVx-bBB8-gBSa-V1TF-r7kw-A91xhV', 'scsi-0QEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa', 'scsi-SQEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:44.344745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--10b3d195--009d--5006--b5f6--1b7aa1316d97-osd--block--10b3d195--009d--5006--b5f6--1b7aa1316d97', 'dm-uuid-LVM-G4QXe0RydoR02C1cjl3dfZHdcG2JRzgBfeSFfktNF4Pd0AIxdth5Rk39VfMqiFDg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1', 'scsi-SQEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:44.344771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:44.344815 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:44.344825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 13:56:44.344915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:44.344927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2177925c--0e94--5467--9f04--b37733dbe47a-osd--block--2177925c--0e94--5467--9f04--b37733dbe47a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i613m1-lBdX-HvMf-f2aJ-l1zY-Nwc5-iTWCrE', 'scsi-0QEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094', 'scsi-SQEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:44.344944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--10b3d195--009d--5006--b5f6--1b7aa1316d97-osd--block--10b3d195--009d--5006--b5f6--1b7aa1316d97'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gKK1g8-CQNw-FbmJ-foMT-xxHz-dmhJ-1Q4lcD', 'scsi-0QEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f', 'scsi-SQEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:44.344959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29', 'scsi-SQEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:44.344975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 13:56:44.344985 | orchestrator | skipping: [testbed-node-5] 2025-07-12 13:56:44.344995 | orchestrator | 2025-07-12 13:56:44.345005 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-12 13:56:44.345015 | orchestrator | Saturday 12 July 2025 13:54:50 +0000 (0:00:00.544) 0:00:16.705 ********* 2025-07-12 13:56:44.345025 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09698b4c--8482--58a0--ad33--d3500ef3a9f7-osd--block--09698b4c--8482--58a0--ad33--d3500ef3a9f7', 'dm-uuid-LVM-4MF8FKekAfibsfbuuKjfJMplsjoYqjph0Xzt3sPd98YeKxR2QYYiQusPioenEqOL'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345036 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f35471dc--23d0--5222--b540--93882fae0f69-osd--block--f35471dc--23d0--5222--b540--93882fae0f69', 'dm-uuid-LVM-bfTaqVa88Rh4Nequz5jEWqhv8Td4ZmNEk6j5EzVds24XoTOrltYM7dL2Lhbdua3t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345054 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345064 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345095 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345115 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345132 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345157 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f88c8806--82e1--5c41--a829--e62dc4a8fdb6-osd--block--f88c8806--82e1--5c41--a829--e62dc4a8fdb6', 'dm-uuid-LVM-PL6sVvcXnMQc2eiNHfOUI24TaeNmZfwUZwuYEvpVd1ZPqfmSI02R1EW4iawYgKm3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part1', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part14', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part15', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part16', 'scsi-SQEMU_QEMU_HARDDISK_c96a506e-4f4f-4467-9080-6e4031891f49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 13:56:44.345192 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fbedf305--2fae--5605--926c--96a21a5245d1-osd--block--fbedf305--2fae--5605--926c--96a21a5245d1', 'dm-uuid-LVM-RL6JixEV7A5I01cMNuWGdtUMze3uy7fwReT9hUfFvSByD1xD02QmFCPfrfxrr2bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--09698b4c--8482--58a0--ad33--d3500ef3a9f7-osd--block--09698b4c--8482--58a0--ad33--d3500ef3a9f7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7JkHHe-bttj-aJwz-4iXT-Ljd7-kKVl-eKVWMP', 'scsi-0QEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830', 'scsi-SQEMU_QEMU_HARDDISK_ae608c05-0dbb-4002-aca8-8a9a246fd830'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345219 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345235 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f35471dc--23d0--5222--b540--93882fae0f69-osd--block--f35471dc--23d0--5222--b540--93882fae0f69'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2txnhq-Fhyu-kyj7-iRya-mECk-ZjRq-xPZGdV', 'scsi-0QEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767', 'scsi-SQEMU_QEMU_HARDDISK_910ce96f-e512-4ca8-91f5-259aab453767'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345245 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345261 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac', 'scsi-SQEMU_QEMU_HARDDISK_657fd216-2be4-4730-9631-748e74f421ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345272 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345299 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.345315 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345326 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345342 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2177925c--0e94--5467--9f04--b37733dbe47a-osd--block--2177925c--0e94--5467--9f04--b37733dbe47a', 'dm-uuid-LVM-HhJf71qEjqPRC94IO3h96dIc0QoGrWborFvpLuXK7q9owoecVv6ZEWdnKpmoS0BU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors':
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345369 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--10b3d195--009d--5006--b5f6--1b7aa1316d97-osd--block--10b3d195--009d--5006--b5f6--1b7aa1316d97', 'dm-uuid-LVM-G4QXe0RydoR02C1cjl3dfZHdcG2JRzgBfeSFfktNF4Pd0AIxdth5Rk39VfMqiFDg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345395 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345411 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345422 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345437 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345453 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part1', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part14', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part15', 'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part16', 
'scsi-SQEMU_QEMU_HARDDISK_2f7c4103-b7a1-40b5-b240-8feb842c5041-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345471 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f88c8806--82e1--5c41--a829--e62dc4a8fdb6-osd--block--f88c8806--82e1--5c41--a829--e62dc4a8fdb6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-C0wuZN-oRBc-0l8h-zfMZ-pRfR-pgPn-zQO3yO', 'scsi-0QEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98', 'scsi-SQEMU_QEMU_HARDDISK_f0941989-f7a4-4554-ad13-0c2066939c98'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345497 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345507 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fbedf305--2fae--5605--926c--96a21a5245d1-osd--block--fbedf305--2fae--5605--926c--96a21a5245d1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NvbKyO-TkVx-bBB8-gBSa-V1TF-r7kw-A91xhV', 'scsi-0QEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa', 'scsi-SQEMU_QEMU_HARDDISK_6157a0e8-ea5c-4f54-9d28-af3024f948aa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345522 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345537 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1', 'scsi-SQEMU_QEMU_HARDDISK_164e6fa7-4d5f-42f9-ad9a-1ba332eaeca1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345548 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345574 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.345584 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345594 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345618 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce0b842b-4d26-4f39-a7a5-95396abdad92-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 13:56:44.345635 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2177925c--0e94--5467--9f04--b37733dbe47a-osd--block--2177925c--0e94--5467--9f04--b37733dbe47a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i613m1-lBdX-HvMf-f2aJ-l1zY-Nwc5-iTWCrE', 'scsi-0QEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094', 'scsi-SQEMU_QEMU_HARDDISK_73295db5-c3fe-42a7-9e6b-efb6b935a094'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--10b3d195--009d--5006--b5f6--1b7aa1316d97-osd--block--10b3d195--009d--5006--b5f6--1b7aa1316d97'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gKK1g8-CQNw-FbmJ-foMT-xxHz-dmhJ-1Q4lcD', 'scsi-0QEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f', 'scsi-SQEMU_QEMU_HARDDISK_ce974423-4fe6-4a7d-9a96-297586e8ac2f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345660 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29', 'scsi-SQEMU_QEMU_HARDDISK_584411ea-1998-4909-85e4-828e969f2c29'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 13:56:44.345678 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-13-00-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 13:56:44.345694 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.345704 | orchestrator |
2025-07-12 13:56:44.345713 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-07-12 13:56:44.345723 | orchestrator | Saturday 12 July 2025 13:54:51 +0000 (0:00:00.581) 0:00:17.286 *********
2025-07-12 13:56:44.345734 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.345743 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.345753 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.345763 | orchestrator |
2025-07-12 13:56:44.345772 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-07-12 13:56:44.345782 | orchestrator | Saturday 12 July 2025 13:54:52 +0000 (0:00:00.761) 0:00:18.048 *********
2025-07-12 13:56:44.345792 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.345802 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.345811 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.345821 | orchestrator |
2025-07-12 13:56:44.345830 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 13:56:44.345840 | orchestrator | Saturday 12 July 2025 13:54:52 +0000 (0:00:00.492) 0:00:18.540 *********
2025-07-12 13:56:44.345850 | orchestrator | ok: [testbed-node-3]
2025-07-12 13:56:44.345860 | orchestrator | ok: [testbed-node-4]
2025-07-12 13:56:44.345869 | orchestrator | ok: [testbed-node-5]
2025-07-12 13:56:44.345879 | orchestrator |
2025-07-12 13:56:44.345888 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 13:56:44.345898 | orchestrator | Saturday 12 July 2025 13:54:53 +0000 (0:00:00.678) 0:00:19.218 *********
2025-07-12 13:56:44.345908 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.345917 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.345927 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.345937 | orchestrator |
2025-07-12 13:56:44.345946 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 13:56:44.345956 | orchestrator | Saturday 12 July 2025 13:54:53 +0000 (0:00:00.307) 0:00:19.526 *********
2025-07-12 13:56:44.345965 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.345975 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.345985 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.345994 | orchestrator |
2025-07-12 13:56:44.346004 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 13:56:44.346039 | orchestrator | Saturday 12 July 2025 13:54:53 +0000 (0:00:00.407) 0:00:19.933 *********
2025-07-12 13:56:44.346052 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.346061 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.346071 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.346081 | orchestrator |
2025-07-12 13:56:44.346090 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-07-12 13:56:44.346100 | orchestrator | Saturday 12 July 2025 13:54:54 +0000 (0:00:00.539) 0:00:20.473 *********
2025-07-12 13:56:44.346109 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 13:56:44.346119 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 13:56:44.346129 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 13:56:44.346138 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 13:56:44.346148 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 13:56:44.346157 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 13:56:44.346166 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 13:56:44.346176 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 13:56:44.346185 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 13:56:44.346195 | orchestrator |
2025-07-12 13:56:44.346204 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-07-12 13:56:44.346214 | orchestrator | Saturday 12 July 2025 13:54:55 +0000 (0:00:00.842) 0:00:21.315 *********
2025-07-12 13:56:44.346229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 13:56:44.346238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 13:56:44.346248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 13:56:44.346257 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.346267 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 13:56:44.346276 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 13:56:44.346286 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 13:56:44.346295 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.346309 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 13:56:44.346319 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 13:56:44.346328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 13:56:44.346337 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.346347 | orchestrator |
2025-07-12 13:56:44.346405 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-07-12 13:56:44.346415 | orchestrator | Saturday 12 July 2025 13:54:55 +0000 (0:00:00.344) 0:00:21.659 *********
2025-07-12 13:56:44.346425 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 13:56:44.346435 | orchestrator |
2025-07-12 13:56:44.346444 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-12 13:56:44.346455 | orchestrator | Saturday 12 July 2025 13:54:56 +0000 (0:00:00.745) 0:00:22.404 *********
2025-07-12 13:56:44.346464 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.346474 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.346483 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.346493 | orchestrator |
2025-07-12 13:56:44.346509 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-12 13:56:44.346519 | orchestrator | Saturday 12 July 2025 13:54:56 +0000 (0:00:00.367) 0:00:22.772 *********
2025-07-12 13:56:44.346529 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.346538 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.346548 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.346558 | orchestrator |
2025-07-12 13:56:44.346567 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-12 13:56:44.346577 | orchestrator | Saturday 12 July 2025 13:54:57 +0000 (0:00:00.376) 0:00:23.148 *********
2025-07-12 13:56:44.346587 | orchestrator | skipping: [testbed-node-3]
2025-07-12 13:56:44.346596 | orchestrator | skipping: [testbed-node-4]
2025-07-12 13:56:44.346606 | orchestrator | skipping: [testbed-node-5]
2025-07-12 13:56:44.346616 | orchestrator |
2025-07-12 13:56:44.346625 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-12 13:56:44.346635 | orchestrator | Saturday 12 July 2025 13:54:57 +0000 (0:00:00.338) 0:00:23.487 ********* 2025-07-12
13:56:44.346645 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:44.346654 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:44.346664 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:44.346674 | orchestrator | 2025-07-12 13:56:44.346683 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-12 13:56:44.346692 | orchestrator | Saturday 12 July 2025 13:54:58 +0000 (0:00:00.726) 0:00:24.213 ********* 2025-07-12 13:56:44.346700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:56:44.346708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:56:44.346716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:56:44.346724 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:44.346732 | orchestrator | 2025-07-12 13:56:44.346739 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-12 13:56:44.346747 | orchestrator | Saturday 12 July 2025 13:54:58 +0000 (0:00:00.380) 0:00:24.594 ********* 2025-07-12 13:56:44.346759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:56:44.346767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:56:44.346775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:56:44.346783 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:44.346790 | orchestrator | 2025-07-12 13:56:44.346798 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-12 13:56:44.346806 | orchestrator | Saturday 12 July 2025 13:54:59 +0000 (0:00:00.381) 0:00:24.975 ********* 2025-07-12 13:56:44.346813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 13:56:44.346821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 13:56:44.346829 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 13:56:44.346837 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:44.346845 | orchestrator | 2025-07-12 13:56:44.346852 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-12 13:56:44.346860 | orchestrator | Saturday 12 July 2025 13:54:59 +0000 (0:00:00.401) 0:00:25.377 ********* 2025-07-12 13:56:44.346868 | orchestrator | ok: [testbed-node-3] 2025-07-12 13:56:44.346876 | orchestrator | ok: [testbed-node-4] 2025-07-12 13:56:44.346884 | orchestrator | ok: [testbed-node-5] 2025-07-12 13:56:44.346892 | orchestrator | 2025-07-12 13:56:44.346899 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-12 13:56:44.346907 | orchestrator | Saturday 12 July 2025 13:54:59 +0000 (0:00:00.322) 0:00:25.700 ********* 2025-07-12 13:56:44.346915 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-12 13:56:44.346923 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-12 13:56:44.346931 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-12 13:56:44.346938 | orchestrator | 2025-07-12 13:56:44.346946 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-12 13:56:44.346954 | orchestrator | Saturday 12 July 2025 13:55:00 +0000 (0:00:00.520) 0:00:26.221 ********* 2025-07-12 13:56:44.346962 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-12 13:56:44.346969 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 13:56:44.346977 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 13:56:44.346985 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-12 13:56:44.346993 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-07-12 13:56:44.347001 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 13:56:44.347015 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 13:56:44.347023 | orchestrator | 2025-07-12 13:56:44.347031 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-12 13:56:44.347038 | orchestrator | Saturday 12 July 2025 13:55:01 +0000 (0:00:01.019) 0:00:27.241 ********* 2025-07-12 13:56:44.347046 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-12 13:56:44.347054 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 13:56:44.347062 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 13:56:44.347070 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-12 13:56:44.347077 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-12 13:56:44.347085 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 13:56:44.347093 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 13:56:44.347101 | orchestrator | 2025-07-12 13:56:44.347112 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-07-12 13:56:44.347131 | orchestrator | Saturday 12 July 2025 13:55:03 +0000 (0:00:02.004) 0:00:29.246 ********* 2025-07-12 13:56:44.347143 | orchestrator | skipping: [testbed-node-3] 2025-07-12 13:56:44.347156 | orchestrator | skipping: [testbed-node-4] 2025-07-12 13:56:44.347169 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-07-12 13:56:44.347182 | orchestrator | 2025-07-12 13:56:44.347195 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-07-12 13:56:44.347205 | orchestrator | Saturday 12 July 2025 13:55:03 +0000 (0:00:00.408) 0:00:29.654 ********* 2025-07-12 13:56:44.347214 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 13:56:44.347223 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 13:56:44.347231 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 13:56:44.347239 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 13:56:44.347248 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 13:56:44.347256 | orchestrator | 2025-07-12 13:56:44.347264 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-07-12 13:56:44.347272 | orchestrator | Saturday 12 July 2025 13:55:48 +0000 (0:00:45.202) 0:01:14.857 ********* 2025-07-12 13:56:44.347280 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347288 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347295 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347303 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347311 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347319 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347327 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-07-12 13:56:44.347335 | orchestrator | 2025-07-12 13:56:44.347342 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-07-12 13:56:44.347350 | orchestrator | Saturday 12 July 2025 13:56:13 +0000 (0:00:24.369) 0:01:39.227 ********* 2025-07-12 13:56:44.347380 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347388 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347396 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347404 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347411 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347429 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347437 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 13:56:44.347445 | orchestrator | 2025-07-12 13:56:44.347453 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-07-12 13:56:44.347461 | orchestrator | Saturday 12 July 2025 13:56:25 +0000 (0:00:12.391) 0:01:51.619 ********* 2025-07-12 13:56:44.347468 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347476 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 13:56:44.347484 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 13:56:44.347492 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347499 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 13:56:44.347507 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 13:56:44.347520 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347528 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 13:56:44.347536 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 13:56:44.347544 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347551 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 13:56:44.347559 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 13:56:44.347567 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347575 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-07-12 13:56:44.347582 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 13:56:44.347590 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 13:56:44.347598 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 13:56:44.347605 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 13:56:44.347613 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-07-12 13:56:44.347621 | orchestrator | 2025-07-12 13:56:44.347629 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:56:44.347637 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-07-12 13:56:44.347646 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-12 13:56:44.347654 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-12 13:56:44.347662 | orchestrator | 2025-07-12 13:56:44.347670 | orchestrator | 2025-07-12 13:56:44.347677 | orchestrator | 2025-07-12 13:56:44.347685 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:56:44.347693 | orchestrator | Saturday 12 July 2025 13:56:43 +0000 (0:00:17.659) 0:02:09.279 ********* 2025-07-12 13:56:44.347700 | orchestrator | =============================================================================== 2025-07-12 13:56:44.347708 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.20s 2025-07-12 13:56:44.347716 | orchestrator | generate keys ---------------------------------------------------------- 24.37s 2025-07-12 13:56:44.347724 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.66s 
2025-07-12 13:56:44.347737 | orchestrator | get keys from monitors ------------------------------------------------- 12.39s 2025-07-12 13:56:44.347745 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.11s 2025-07-12 13:56:44.347753 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.00s 2025-07-12 13:56:44.347760 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.85s 2025-07-12 13:56:44.347768 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s 2025-07-12 13:56:44.347776 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s 2025-07-12 13:56:44.347784 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s 2025-07-12 13:56:44.347791 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.78s 2025-07-12 13:56:44.347799 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.76s 2025-07-12 13:56:44.347807 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.75s 2025-07-12 13:56:44.347815 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.73s 2025-07-12 13:56:44.347822 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2025-07-12 13:56:44.347830 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s 2025-07-12 13:56:44.347838 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.66s 2025-07-12 13:56:44.347849 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2025-07-12 13:56:44.347857 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s 2025-07-12 
13:56:44.347865 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.54s 2025-07-12 13:56:44.347873 | orchestrator | 2025-07-12 13:56:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:56:47.387435 | orchestrator | 2025-07-12 13:56:47 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state STARTED 2025-07-12 13:56:47.389213 | orchestrator | 2025-07-12 13:56:47 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:56:47.391185 | orchestrator | 2025-07-12 13:56:47 | INFO  | Task 30213eff-4ecc-4169-966d-36d798ea5380 is in state STARTED 2025-07-12 13:56:47.391218 | orchestrator | 2025-07-12 13:56:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:56:50.433164 | orchestrator | 2025-07-12 13:56:50 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state STARTED 2025-07-12 13:56:50.435295 | orchestrator | 2025-07-12 13:56:50 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:56:50.436798 | orchestrator | 2025-07-12 13:56:50 | INFO  | Task 30213eff-4ecc-4169-966d-36d798ea5380 is in state STARTED 2025-07-12 13:56:50.436832 | orchestrator | 2025-07-12 13:56:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:56:53.484064 | orchestrator | 2025-07-12 13:56:53 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state STARTED 2025-07-12 13:56:53.485462 | orchestrator | 2025-07-12 13:56:53 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:56:53.492868 | orchestrator | 2025-07-12 13:56:53 | INFO  | Task 30213eff-4ecc-4169-966d-36d798ea5380 is in state STARTED 2025-07-12 13:56:53.492897 | orchestrator | 2025-07-12 13:56:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:56:56.533775 | orchestrator | 2025-07-12 13:56:56 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state STARTED 2025-07-12 13:56:56.534931 | orchestrator | 2025-07-12 13:56:56 | INFO  | Task 
8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:56:56.537238 | orchestrator | 2025-07-12 13:56:56 | INFO  | Task 30213eff-4ecc-4169-966d-36d798ea5380 is in state STARTED 2025-07-12 13:56:56.537599 | orchestrator | 2025-07-12 13:56:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:56:59.585731 | orchestrator | 2025-07-12 13:56:59 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state STARTED 2025-07-12 13:56:59.587304 | orchestrator | 2025-07-12 13:56:59 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:56:59.588178 | orchestrator | 2025-07-12 13:56:59 | INFO  | Task 30213eff-4ecc-4169-966d-36d798ea5380 is in state STARTED 2025-07-12 13:56:59.588486 | orchestrator | 2025-07-12 13:56:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:02.664010 | orchestrator | 2025-07-12 13:57:02 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state STARTED 2025-07-12 13:57:02.664106 | orchestrator | 2025-07-12 13:57:02 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:02.666124 | orchestrator | 2025-07-12 13:57:02 | INFO  | Task 30213eff-4ecc-4169-966d-36d798ea5380 is in state STARTED 2025-07-12 13:57:02.666641 | orchestrator | 2025-07-12 13:57:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:05.721192 | orchestrator | 2025-07-12 13:57:05 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state STARTED 2025-07-12 13:57:05.721751 | orchestrator | 2025-07-12 13:57:05 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:05.723691 | orchestrator | 2025-07-12 13:57:05 | INFO  | Task 30213eff-4ecc-4169-966d-36d798ea5380 is in state STARTED 2025-07-12 13:57:05.723964 | orchestrator | 2025-07-12 13:57:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:08.768888 | orchestrator | 2025-07-12 13:57:08 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state 
STARTED 2025-07-12 13:57:08.771040 | orchestrator | 2025-07-12 13:57:08 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:08.774937 | orchestrator | 2025-07-12 13:57:08 | INFO  | Task 30213eff-4ecc-4169-966d-36d798ea5380 is in state STARTED 2025-07-12 13:57:08.774982 | orchestrator | 2025-07-12 13:57:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:11.831208 | orchestrator | 2025-07-12 13:57:11 | INFO  | Task bb142c5b-f4e0-4867-8402-0434c15dd478 is in state SUCCESS 2025-07-12 13:57:11.832189 | orchestrator | 2025-07-12 13:57:11.832226 | orchestrator | 2025-07-12 13:57:11.832646 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:57:11.832673 | orchestrator | 2025-07-12 13:57:11.832685 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:57:11.832696 | orchestrator | Saturday 12 July 2025 13:55:24 +0000 (0:00:00.291) 0:00:00.291 ********* 2025-07-12 13:57:11.832708 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.832720 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.832730 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.832741 | orchestrator | 2025-07-12 13:57:11.832753 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:57:11.832764 | orchestrator | Saturday 12 July 2025 13:55:24 +0000 (0:00:00.301) 0:00:00.593 ********* 2025-07-12 13:57:11.832775 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-07-12 13:57:11.832786 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-07-12 13:57:11.832797 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-07-12 13:57:11.832807 | orchestrator | 2025-07-12 13:57:11.832818 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-07-12 13:57:11.832829 | 
orchestrator | 2025-07-12 13:57:11.832840 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 13:57:11.832852 | orchestrator | Saturday 12 July 2025 13:55:25 +0000 (0:00:00.433) 0:00:01.026 ********* 2025-07-12 13:57:11.832889 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:57:11.832901 | orchestrator | 2025-07-12 13:57:11.832912 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-07-12 13:57:11.832923 | orchestrator | Saturday 12 July 2025 13:55:25 +0000 (0:00:00.554) 0:00:01.580 ********* 2025-07-12 13:57:11.832940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:11.832974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:11.833095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:11.833121 | orchestrator | 2025-07-12 13:57:11.833132 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-07-12 13:57:11.833144 | orchestrator | Saturday 12 July 2025 13:55:26 +0000 (0:00:01.072) 0:00:02.653 ********* 2025-07-12 13:57:11.833155 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.833166 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.833177 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.833187 | orchestrator | 2025-07-12 13:57:11.833198 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 13:57:11.833209 | orchestrator | Saturday 12 July 2025 13:55:27 +0000 
(0:00:00.454) 0:00:03.107 ********* 2025-07-12 13:57:11.833220 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-12 13:57:11.833247 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-12 13:57:11.833261 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-07-12 13:57:11.833273 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-07-12 13:57:11.833285 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-07-12 13:57:11.833305 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-07-12 13:57:11.833316 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-07-12 13:57:11.833327 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-07-12 13:57:11.833366 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-12 13:57:11.833378 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-12 13:57:11.833389 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-07-12 13:57:11.833400 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-07-12 13:57:11.833410 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-07-12 13:57:11.833421 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-07-12 13:57:11.833432 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-07-12 13:57:11.833443 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-07-12 13:57:11.833454 | orchestrator | 
skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-12 13:57:11.833464 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-12 13:57:11.833475 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-07-12 13:57:11.833485 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-07-12 13:57:11.833496 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-07-12 13:57:11.833507 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-07-12 13:57:11.833517 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-07-12 13:57:11.833528 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-07-12 13:57:11.833540 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-07-12 13:57:11.833552 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-07-12 13:57:11.833563 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-07-12 13:57:11.833574 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-07-12 13:57:11.833585 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-07-12 13:57:11.833596 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-07-12 13:57:11.833607 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-07-12 13:57:11.833617 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-07-12 13:57:11.833628 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-07-12 13:57:11.833640 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-07-12 13:57:11.833657 | orchestrator | 2025-07-12 13:57:11.833669 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:11.833680 | orchestrator | Saturday 12 July 2025 13:55:28 +0000 (0:00:00.800) 0:00:03.908 ********* 2025-07-12 13:57:11.833690 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.833701 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.833712 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.833723 | orchestrator | 2025-07-12 13:57:11.833734 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:11.833745 | orchestrator | Saturday 12 July 2025 13:55:28 +0000 (0:00:00.326) 0:00:04.234 ********* 2025-07-12 13:57:11.833756 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.833767 | orchestrator | 2025-07-12 13:57:11.833788 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:11.833800 | orchestrator | Saturday 12 July 2025 
13:55:28 +0000 (0:00:00.126) 0:00:04.361 ********* 2025-07-12 13:57:11.833811 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.833821 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.833933 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.833946 | orchestrator | 2025-07-12 13:57:11.833957 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:11.833967 | orchestrator | Saturday 12 July 2025 13:55:29 +0000 (0:00:00.546) 0:00:04.907 ********* 2025-07-12 13:57:11.833978 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.833989 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.834000 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.834011 | orchestrator | 2025-07-12 13:57:11.834104 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:11.834116 | orchestrator | Saturday 12 July 2025 13:55:29 +0000 (0:00:00.317) 0:00:05.225 ********* 2025-07-12 13:57:11.834126 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.834137 | orchestrator | 2025-07-12 13:57:11.834148 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:11.834159 | orchestrator | Saturday 12 July 2025 13:55:29 +0000 (0:00:00.130) 0:00:05.356 ********* 2025-07-12 13:57:11.834169 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.834180 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.834191 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.834202 | orchestrator | 2025-07-12 13:57:11.834213 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:11.834224 | orchestrator | Saturday 12 July 2025 13:55:29 +0000 (0:00:00.303) 0:00:05.660 ********* 2025-07-12 13:57:11.834234 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.834245 | orchestrator | ok: 
[testbed-node-1] 2025-07-12 13:57:11.834256 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.834267 | orchestrator | 2025-07-12 13:57:11.834277 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:11.834288 | orchestrator | Saturday 12 July 2025 13:55:30 +0000 (0:00:00.380) 0:00:06.041 ********* 2025-07-12 13:57:11.834299 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.834310 | orchestrator | 2025-07-12 13:57:11.834320 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:11.834331 | orchestrator | Saturday 12 July 2025 13:55:30 +0000 (0:00:00.433) 0:00:06.474 ********* 2025-07-12 13:57:11.834361 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.834372 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.834383 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.834394 | orchestrator | 2025-07-12 13:57:11.834405 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:11.834416 | orchestrator | Saturday 12 July 2025 13:55:31 +0000 (0:00:00.331) 0:00:06.806 ********* 2025-07-12 13:57:11.834427 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.834438 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.834449 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.834470 | orchestrator | 2025-07-12 13:57:11.834481 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:11.834492 | orchestrator | Saturday 12 July 2025 13:55:31 +0000 (0:00:00.286) 0:00:07.092 ********* 2025-07-12 13:57:11.834503 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.834513 | orchestrator | 2025-07-12 13:57:11.834524 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:11.834535 | orchestrator | Saturday 
12 July 2025 13:55:31 +0000 (0:00:00.128) 0:00:07.221 ********* 2025-07-12 13:57:11.834546 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.834559 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.834571 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.834583 | orchestrator | 2025-07-12 13:57:11.834595 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:11.834607 | orchestrator | Saturday 12 July 2025 13:55:31 +0000 (0:00:00.283) 0:00:07.505 ********* 2025-07-12 13:57:11.834619 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.834631 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.834642 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.834654 | orchestrator | 2025-07-12 13:57:11.834667 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:11.834679 | orchestrator | Saturday 12 July 2025 13:55:32 +0000 (0:00:00.530) 0:00:08.035 ********* 2025-07-12 13:57:11.834692 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.834704 | orchestrator | 2025-07-12 13:57:11.834715 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:11.834727 | orchestrator | Saturday 12 July 2025 13:55:32 +0000 (0:00:00.137) 0:00:08.173 ********* 2025-07-12 13:57:11.834739 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.834751 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.834763 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.834775 | orchestrator | 2025-07-12 13:57:11.834787 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:11.834799 | orchestrator | Saturday 12 July 2025 13:55:32 +0000 (0:00:00.315) 0:00:08.488 ********* 2025-07-12 13:57:11.834811 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.834823 | 
orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.834835 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.834847 | orchestrator | 2025-07-12 13:57:11.834859 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:11.834870 | orchestrator | Saturday 12 July 2025 13:55:33 +0000 (0:00:00.380) 0:00:08.869 ********* 2025-07-12 13:57:11.834881 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.834892 | orchestrator | 2025-07-12 13:57:11.834902 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:11.834913 | orchestrator | Saturday 12 July 2025 13:55:33 +0000 (0:00:00.116) 0:00:08.985 ********* 2025-07-12 13:57:11.834924 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.834934 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.834945 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.834956 | orchestrator | 2025-07-12 13:57:11.834966 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:11.834986 | orchestrator | Saturday 12 July 2025 13:55:33 +0000 (0:00:00.481) 0:00:09.467 ********* 2025-07-12 13:57:11.834998 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.835014 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.835025 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.835036 | orchestrator | 2025-07-12 13:57:11.835047 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:11.835058 | orchestrator | Saturday 12 July 2025 13:55:34 +0000 (0:00:00.332) 0:00:09.799 ********* 2025-07-12 13:57:11.835069 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.835080 | orchestrator | 2025-07-12 13:57:11.835091 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:11.835102 | 
orchestrator | Saturday 12 July 2025 13:55:34 +0000 (0:00:00.147) 0:00:09.946 ********* 2025-07-12 13:57:11.835119 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.835130 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.835141 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.835152 | orchestrator | 2025-07-12 13:57:11.835163 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:11.835174 | orchestrator | Saturday 12 July 2025 13:55:34 +0000 (0:00:00.301) 0:00:10.247 ********* 2025-07-12 13:57:11.835184 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.835195 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.835206 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.835217 | orchestrator | 2025-07-12 13:57:11.835228 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:11.835239 | orchestrator | Saturday 12 July 2025 13:55:34 +0000 (0:00:00.331) 0:00:10.579 ********* 2025-07-12 13:57:11.835249 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.835260 | orchestrator | 2025-07-12 13:57:11.835271 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:11.835282 | orchestrator | Saturday 12 July 2025 13:55:35 +0000 (0:00:00.127) 0:00:10.706 ********* 2025-07-12 13:57:11.835292 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.835303 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.835314 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.835325 | orchestrator | 2025-07-12 13:57:11.835400 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:11.835414 | orchestrator | Saturday 12 July 2025 13:55:35 +0000 (0:00:00.540) 0:00:11.247 ********* 2025-07-12 13:57:11.835425 | orchestrator | ok: [testbed-node-0] 2025-07-12 
13:57:11.835436 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.835446 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.835457 | orchestrator | 2025-07-12 13:57:11.835468 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:11.835478 | orchestrator | Saturday 12 July 2025 13:55:35 +0000 (0:00:00.357) 0:00:11.604 ********* 2025-07-12 13:57:11.835489 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.835500 | orchestrator | 2025-07-12 13:57:11.835511 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 13:57:11.835522 | orchestrator | Saturday 12 July 2025 13:55:36 +0000 (0:00:00.145) 0:00:11.750 ********* 2025-07-12 13:57:11.835532 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.835543 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.835554 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.835565 | orchestrator | 2025-07-12 13:57:11.835576 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 13:57:11.835587 | orchestrator | Saturday 12 July 2025 13:55:36 +0000 (0:00:00.314) 0:00:12.064 ********* 2025-07-12 13:57:11.835597 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:57:11.835608 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:57:11.835619 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:57:11.835630 | orchestrator | 2025-07-12 13:57:11.835641 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 13:57:11.835651 | orchestrator | Saturday 12 July 2025 13:55:36 +0000 (0:00:00.551) 0:00:12.615 ********* 2025-07-12 13:57:11.835660 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.835670 | orchestrator | 2025-07-12 13:57:11.835679 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 
13:57:11.835689 | orchestrator | Saturday 12 July 2025 13:55:37 +0000 (0:00:00.155) 0:00:12.770 ********* 2025-07-12 13:57:11.835699 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.835708 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.835718 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.835727 | orchestrator | 2025-07-12 13:57:11.835737 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-07-12 13:57:11.835746 | orchestrator | Saturday 12 July 2025 13:55:37 +0000 (0:00:00.311) 0:00:13.082 ********* 2025-07-12 13:57:11.835762 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:57:11.835772 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:57:11.835781 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:57:11.835791 | orchestrator | 2025-07-12 13:57:11.835800 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-07-12 13:57:11.835810 | orchestrator | Saturday 12 July 2025 13:55:38 +0000 (0:00:01.580) 0:00:14.663 ********* 2025-07-12 13:57:11.835820 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 13:57:11.835829 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 13:57:11.835839 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 13:57:11.835849 | orchestrator | 2025-07-12 13:57:11.835858 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-07-12 13:57:11.835868 | orchestrator | Saturday 12 July 2025 13:55:41 +0000 (0:00:02.243) 0:00:16.906 ********* 2025-07-12 13:57:11.835877 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 13:57:11.835887 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 13:57:11.835897 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 13:57:11.835906 | orchestrator | 2025-07-12 13:57:11.835916 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-07-12 13:57:11.835936 | orchestrator | Saturday 12 July 2025 13:55:43 +0000 (0:00:02.338) 0:00:19.245 ********* 2025-07-12 13:57:11.835946 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 13:57:11.835956 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 13:57:11.835966 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 13:57:11.835975 | orchestrator | 2025-07-12 13:57:11.835985 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-07-12 13:57:11.835994 | orchestrator | Saturday 12 July 2025 13:55:45 +0000 (0:00:01.508) 0:00:20.753 ********* 2025-07-12 13:57:11.836004 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.836013 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.836023 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.836032 | orchestrator | 2025-07-12 13:57:11.836042 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-07-12 13:57:11.836051 | orchestrator | Saturday 12 July 2025 13:55:45 +0000 (0:00:00.292) 0:00:21.045 ********* 2025-07-12 13:57:11.836061 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.836070 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.836080 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.836089 | orchestrator | 2025-07-12 13:57:11.836099 | orchestrator 
| TASK [horizon : include_tasks] ************************************************* 2025-07-12 13:57:11.836109 | orchestrator | Saturday 12 July 2025 13:55:45 +0000 (0:00:00.280) 0:00:21.326 ********* 2025-07-12 13:57:11.836118 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:57:11.836128 | orchestrator | 2025-07-12 13:57:11.836137 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-07-12 13:57:11.836147 | orchestrator | Saturday 12 July 2025 13:55:46 +0000 (0:00:00.805) 0:00:22.131 ********* 2025-07-12 13:57:11.836158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:11.836190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:11.836203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:11.836220 | orchestrator | 2025-07-12 13:57:11.836230 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-07-12 13:57:11.836240 | orchestrator | Saturday 12 July 2025 13:55:48 +0000 (0:00:01.580) 0:00:23.712 ********* 2025-07-12 13:57:11.836264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:11.836311 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.836349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2025-07-12 13:57:11.836362 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.836373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:11.836391 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.836401 | orchestrator | 2025-07-12 13:57:11.836411 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-07-12 13:57:11.836420 | orchestrator | Saturday 12 July 2025 13:55:48 +0000 (0:00:00.714) 0:00:24.426 ********* 2025-07-12 13:57:11.836444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:11.836456 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.836466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:11.836483 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.836506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 13:57:11.836517 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.836527 | orchestrator | 2025-07-12 13:57:11.836537 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-07-12 13:57:11.836547 | orchestrator | Saturday 12 July 2025 13:55:49 +0000 (0:00:01.069) 0:00:25.496 ********* 2025-07-12 13:57:11.836557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:11.836587 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:11.836605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 13:57:11.836617 | orchestrator | 2025-07-12 13:57:11.836627 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 13:57:11.836636 | orchestrator | Saturday 12 July 2025 13:55:51 +0000 (0:00:01.452) 0:00:26.948 ********* 2025-07-12 13:57:11.836646 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:57:11.836656 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:57:11.836666 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:57:11.836676 | orchestrator | 2025-07-12 13:57:11.836686 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 13:57:11.836695 | orchestrator | Saturday 12 July 2025 13:55:51 +0000 (0:00:00.302) 0:00:27.251 ********* 2025-07-12 13:57:11.836705 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:57:11.836715 | orchestrator | 2025-07-12 13:57:11.836725 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-07-12 13:57:11.836740 | orchestrator | Saturday 12 July 2025 13:55:52 +0000 (0:00:00.700) 0:00:27.951 ********* 2025-07-12 13:57:11.836755 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:57:11.836765 | orchestrator | 2025-07-12 13:57:11.836775 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-07-12 13:57:11.836785 | orchestrator | Saturday 12 July 2025 13:55:54 +0000 (0:00:02.093) 0:00:30.044 ********* 2025-07-12 13:57:11.836795 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:57:11.836804 | 
orchestrator | 2025-07-12 13:57:11.836814 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-07-12 13:57:11.836824 | orchestrator | Saturday 12 July 2025 13:55:56 +0000 (0:00:02.056) 0:00:32.101 ********* 2025-07-12 13:57:11.836840 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:57:11.836849 | orchestrator | 2025-07-12 13:57:11.836859 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 13:57:11.836869 | orchestrator | Saturday 12 July 2025 13:56:11 +0000 (0:00:15.550) 0:00:47.652 ********* 2025-07-12 13:57:11.836879 | orchestrator | 2025-07-12 13:57:11.836889 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 13:57:11.836898 | orchestrator | Saturday 12 July 2025 13:56:12 +0000 (0:00:00.083) 0:00:47.735 ********* 2025-07-12 13:57:11.836908 | orchestrator | 2025-07-12 13:57:11.836918 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 13:57:11.836928 | orchestrator | Saturday 12 July 2025 13:56:12 +0000 (0:00:00.069) 0:00:47.805 ********* 2025-07-12 13:57:11.836937 | orchestrator | 2025-07-12 13:57:11.836947 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-07-12 13:57:11.836957 | orchestrator | Saturday 12 July 2025 13:56:12 +0000 (0:00:00.066) 0:00:47.871 ********* 2025-07-12 13:57:11.836967 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:57:11.836976 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:57:11.836986 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:57:11.836996 | orchestrator | 2025-07-12 13:57:11.837006 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:57:11.837016 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-07-12 
13:57:11.837026 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-12 13:57:11.837036 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-12 13:57:11.837046 | orchestrator | 2025-07-12 13:57:11.837055 | orchestrator | 2025-07-12 13:57:11.837065 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:57:11.837075 | orchestrator | Saturday 12 July 2025 13:57:10 +0000 (0:00:58.271) 0:01:46.142 ********* 2025-07-12 13:57:11.837085 | orchestrator | =============================================================================== 2025-07-12 13:57:11.837094 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.27s 2025-07-12 13:57:11.837104 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.55s 2025-07-12 13:57:11.837114 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.34s 2025-07-12 13:57:11.837123 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.24s 2025-07-12 13:57:11.837133 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.09s 2025-07-12 13:57:11.837143 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.06s 2025-07-12 13:57:11.837152 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.58s 2025-07-12 13:57:11.837162 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.58s 2025-07-12 13:57:11.837172 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.51s 2025-07-12 13:57:11.837182 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.45s 2025-07-12 13:57:11.837192 | orchestrator | 
horizon : Ensuring config directories exist ----------------------------- 1.07s 2025-07-12 13:57:11.837201 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.07s 2025-07-12 13:57:11.837223 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2025-07-12 13:57:11.837233 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2025-07-12 13:57:11.837243 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2025-07-12 13:57:11.837253 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2025-07-12 13:57:11.837279 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2025-07-12 13:57:11.837290 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2025-07-12 13:57:11.837299 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2025-07-12 13:57:11.837309 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s 2025-07-12 13:57:11.837319 | orchestrator | 2025-07-12 13:57:11 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:11.837329 | orchestrator | 2025-07-12 13:57:11 | INFO  | Task 30213eff-4ecc-4169-966d-36d798ea5380 is in state STARTED 2025-07-12 13:57:11.837354 | orchestrator | 2025-07-12 13:57:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:14.885610 | orchestrator | 2025-07-12 13:57:14 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:14.887069 | orchestrator | 2025-07-12 13:57:14 | INFO  | Task 30213eff-4ecc-4169-966d-36d798ea5380 is in state SUCCESS 2025-07-12 13:57:14.887577 | orchestrator | 2025-07-12 13:57:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:17.942190 | 
orchestrator | 2025-07-12 13:57:17 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:17.943476 | orchestrator | 2025-07-12 13:57:17 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:17.943512 | orchestrator | 2025-07-12 13:57:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:20.992874 | orchestrator | 2025-07-12 13:57:20 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:20.995104 | orchestrator | 2025-07-12 13:57:20 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:20.995137 | orchestrator | 2025-07-12 13:57:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:24.046300 | orchestrator | 2025-07-12 13:57:24 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:24.057232 | orchestrator | 2025-07-12 13:57:24 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:24.057284 | orchestrator | 2025-07-12 13:57:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:27.099091 | orchestrator | 2025-07-12 13:57:27 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:27.099845 | orchestrator | 2025-07-12 13:57:27 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:27.099979 | orchestrator | 2025-07-12 13:57:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:30.142888 | orchestrator | 2025-07-12 13:57:30 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:30.145190 | orchestrator | 2025-07-12 13:57:30 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:30.145224 | orchestrator | 2025-07-12 13:57:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:33.196684 | orchestrator | 2025-07-12 13:57:33 | INFO  | Task 
8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:33.197551 | orchestrator | 2025-07-12 13:57:33 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:33.197581 | orchestrator | 2025-07-12 13:57:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:36.240420 | orchestrator | 2025-07-12 13:57:36 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:36.241627 | orchestrator | 2025-07-12 13:57:36 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:36.241690 | orchestrator | 2025-07-12 13:57:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:39.284832 | orchestrator | 2025-07-12 13:57:39 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:39.285504 | orchestrator | 2025-07-12 13:57:39 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:39.285535 | orchestrator | 2025-07-12 13:57:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:42.332879 | orchestrator | 2025-07-12 13:57:42 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:42.334517 | orchestrator | 2025-07-12 13:57:42 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:42.334552 | orchestrator | 2025-07-12 13:57:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:45.378508 | orchestrator | 2025-07-12 13:57:45 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:45.380076 | orchestrator | 2025-07-12 13:57:45 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:45.380092 | orchestrator | 2025-07-12 13:57:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:48.429486 | orchestrator | 2025-07-12 13:57:48 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 
13:57:48.434004 | orchestrator | 2025-07-12 13:57:48 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:48.434115 | orchestrator | 2025-07-12 13:57:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:51.480704 | orchestrator | 2025-07-12 13:57:51 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:51.482799 | orchestrator | 2025-07-12 13:57:51 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:51.483030 | orchestrator | 2025-07-12 13:57:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:54.534764 | orchestrator | 2025-07-12 13:57:54 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:54.537006 | orchestrator | 2025-07-12 13:57:54 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:54.537093 | orchestrator | 2025-07-12 13:57:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:57:57.581707 | orchestrator | 2025-07-12 13:57:57 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:57:57.583055 | orchestrator | 2025-07-12 13:57:57 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:57:57.583082 | orchestrator | 2025-07-12 13:57:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:00.622793 | orchestrator | 2025-07-12 13:58:00 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:58:00.624737 | orchestrator | 2025-07-12 13:58:00 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:58:00.625660 | orchestrator | 2025-07-12 13:58:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:03.677491 | orchestrator | 2025-07-12 13:58:03 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state STARTED 2025-07-12 13:58:03.680477 | orchestrator | 2025-07-12 13:58:03 | INFO  | Task 
522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:58:03.680561 | orchestrator | 2025-07-12 13:58:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:06.739445 | orchestrator | 2025-07-12 13:58:06 | INFO  | Task 8f91ed68-f685-414d-a89f-45868fe4661a is in state SUCCESS 2025-07-12 13:58:06.744836 | orchestrator | 2025-07-12 13:58:06.744888 | orchestrator | 2025-07-12 13:58:06.744902 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-07-12 13:58:06.744914 | orchestrator | 2025-07-12 13:58:06.744925 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-07-12 13:58:06.744937 | orchestrator | Saturday 12 July 2025 13:56:47 +0000 (0:00:00.163) 0:00:00.163 ********* 2025-07-12 13:58:06.744948 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-07-12 13:58:06.744960 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-12 13:58:06.744971 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-12 13:58:06.744982 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 13:58:06.744993 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-12 13:58:06.745004 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-07-12 13:58:06.745014 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-07-12 13:58:06.745025 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-07-12 13:58:06.745036 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.manila.keyring) 2025-07-12 13:58:06.745046 | orchestrator | 2025-07-12 13:58:06.745057 | orchestrator | TASK [Create share directory] ************************************************** 2025-07-12 13:58:06.745069 | orchestrator | Saturday 12 July 2025 13:56:51 +0000 (0:00:04.163) 0:00:04.326 ********* 2025-07-12 13:58:06.745081 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 13:58:06.745092 | orchestrator | 2025-07-12 13:58:06.745103 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-07-12 13:58:06.745113 | orchestrator | Saturday 12 July 2025 13:56:52 +0000 (0:00:00.993) 0:00:05.320 ********* 2025-07-12 13:58:06.745124 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-07-12 13:58:06.745135 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-12 13:58:06.745146 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-12 13:58:06.745157 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 13:58:06.745168 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-12 13:58:06.745178 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-07-12 13:58:06.745189 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-07-12 13:58:06.745200 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-07-12 13:58:06.745211 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-07-12 13:58:06.745222 | orchestrator | 2025-07-12 13:58:06.745232 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-07-12 13:58:06.745243 | orchestrator | Saturday 12 
July 2025 13:57:06 +0000 (0:00:13.685) 0:00:19.006 ********* 2025-07-12 13:58:06.745269 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-07-12 13:58:06.745281 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-12 13:58:06.745292 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-12 13:58:06.745303 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 13:58:06.745352 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-12 13:58:06.745382 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-07-12 13:58:06.745394 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-07-12 13:58:06.745405 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-07-12 13:58:06.745416 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-07-12 13:58:06.745427 | orchestrator | 2025-07-12 13:58:06.745438 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:58:06.745449 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 13:58:06.745464 | orchestrator | 2025-07-12 13:58:06.745477 | orchestrator | 2025-07-12 13:58:06.745490 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:58:06.745502 | orchestrator | Saturday 12 July 2025 13:57:13 +0000 (0:00:06.922) 0:00:25.929 ********* 2025-07-12 13:58:06.745515 | orchestrator | =============================================================================== 2025-07-12 13:58:06.745527 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.69s 2025-07-12 13:58:06.745540 | orchestrator | Write ceph keys to the configuration 
directory -------------------------- 6.92s 2025-07-12 13:58:06.745553 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.16s 2025-07-12 13:58:06.745565 | orchestrator | Create share directory -------------------------------------------------- 0.99s 2025-07-12 13:58:06.745576 | orchestrator | 2025-07-12 13:58:06.745587 | orchestrator | 2025-07-12 13:58:06.745598 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 13:58:06.745609 | orchestrator | 2025-07-12 13:58:06.745672 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 13:58:06.745686 | orchestrator | Saturday 12 July 2025 13:55:24 +0000 (0:00:00.258) 0:00:00.258 ********* 2025-07-12 13:58:06.745698 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:58:06.745709 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:58:06.745720 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:58:06.745731 | orchestrator | 2025-07-12 13:58:06.745742 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 13:58:06.745753 | orchestrator | Saturday 12 July 2025 13:55:24 +0000 (0:00:00.295) 0:00:00.554 ********* 2025-07-12 13:58:06.745764 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-12 13:58:06.745775 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-12 13:58:06.745786 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-12 13:58:06.745797 | orchestrator | 2025-07-12 13:58:06.745808 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-07-12 13:58:06.745819 | orchestrator | 2025-07-12 13:58:06.745830 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 13:58:06.745841 | orchestrator | Saturday 12 July 2025 13:55:25 +0000 (0:00:00.449) 
0:00:01.003 ********* 2025-07-12 13:58:06.745852 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:58:06.745863 | orchestrator | 2025-07-12 13:58:06.745874 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-07-12 13:58:06.745884 | orchestrator | Saturday 12 July 2025 13:55:25 +0000 (0:00:00.566) 0:00:01.570 ********* 2025-07-12 13:58:06.745902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.745936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.745985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.746001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746174 | orchestrator | 2025-07-12 13:58:06.746194 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-07-12 13:58:06.746215 | orchestrator | Saturday 12 July 2025 13:55:27 +0000 (0:00:01.657) 0:00:03.228 ********* 2025-07-12 13:58:06.746234 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-07-12 13:58:06.746250 | 
orchestrator | 2025-07-12 13:58:06.746261 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-07-12 13:58:06.746279 | orchestrator | Saturday 12 July 2025 13:55:28 +0000 (0:00:00.931) 0:00:04.159 ********* 2025-07-12 13:58:06.746291 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:58:06.746302 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:58:06.746341 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:58:06.746353 | orchestrator | 2025-07-12 13:58:06.746364 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-07-12 13:58:06.746375 | orchestrator | Saturday 12 July 2025 13:55:28 +0000 (0:00:00.511) 0:00:04.671 ********* 2025-07-12 13:58:06.746387 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 13:58:06.746398 | orchestrator | 2025-07-12 13:58:06.746409 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 13:58:06.746420 | orchestrator | Saturday 12 July 2025 13:55:29 +0000 (0:00:00.758) 0:00:05.430 ********* 2025-07-12 13:58:06.746431 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:58:06.746442 | orchestrator | 2025-07-12 13:58:06.746453 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-07-12 13:58:06.746464 | orchestrator | Saturday 12 July 2025 13:55:30 +0000 (0:00:00.528) 0:00:05.958 ********* 2025-07-12 13:58:06.746477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.746505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.746518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.746542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 
13:58:06.746573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746613 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.746625 | orchestrator | 2025-07-12 13:58:06.746636 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-07-12 13:58:06.746647 | orchestrator | Saturday 12 July 2025 13:55:33 +0000 (0:00:03.628) 0:00:09.587 ********* 2025-07-12 13:58:06.746666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:06.746685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.746697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:06.746709 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.746725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:06.746738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.746750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:06.746761 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:06.746780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:06.746798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.746810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:06.746821 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:06.746832 | orchestrator | 2025-07-12 13:58:06.746843 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-07-12 13:58:06.746855 | orchestrator | Saturday 12 July 2025 13:55:34 +0000 (0:00:00.661) 0:00:10.249 ********* 2025-07-12 13:58:06.746877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:06.746890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.746908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:06.746926 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.746938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:06.746950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.746967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:06.746979 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:06.746990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 13:58:06.747009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.747029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 13:58:06.747041 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:06.747052 | orchestrator | 2025-07-12 13:58:06.747063 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-07-12 13:58:06.747074 | orchestrator | Saturday 12 July 2025 13:55:35 +0000 (0:00:00.866) 0:00:11.115 ********* 2025-07-12 13:58:06.747086 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.747103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.747122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.747142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747222 | orchestrator | 2025-07-12 13:58:06.747234 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-07-12 13:58:06.747245 | orchestrator | Saturday 12 July 2025 13:55:38 +0000 (0:00:03.712) 0:00:14.828 ********* 2025-07-12 13:58:06.747263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.747276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.747289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.747378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.747402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.747423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.747435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747469 | orchestrator | 2025-07-12 13:58:06.747486 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-07-12 13:58:06.747498 | orchestrator | Saturday 12 July 2025 13:55:44 +0000 (0:00:05.364) 0:00:20.192 ********* 2025-07-12 13:58:06.747509 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:58:06.747520 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:58:06.747531 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:58:06.747542 | orchestrator | 2025-07-12 13:58:06.747553 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-07-12 13:58:06.747564 | orchestrator | Saturday 12 July 2025 13:55:45 +0000 (0:00:01.378) 0:00:21.571 ********* 2025-07-12 13:58:06.747581 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.747592 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:06.747603 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:06.747613 | orchestrator | 2025-07-12 13:58:06.747624 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-07-12 13:58:06.747635 | orchestrator | Saturday 12 July 2025 13:55:46 +0000 (0:00:00.510) 0:00:22.081 ********* 2025-07-12 13:58:06.747646 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.747657 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:06.747668 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:06.747678 | orchestrator | 2025-07-12 13:58:06.747689 | orchestrator | TASK [keystone : Copying Keystone 
Domain specific settings] ******************** 2025-07-12 13:58:06.747700 | orchestrator | Saturday 12 July 2025 13:55:46 +0000 (0:00:00.528) 0:00:22.610 ********* 2025-07-12 13:58:06.747711 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.747722 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:06.747733 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:06.747743 | orchestrator | 2025-07-12 13:58:06.747754 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-07-12 13:58:06.747765 | orchestrator | Saturday 12 July 2025 13:55:47 +0000 (0:00:00.380) 0:00:22.990 ********* 2025-07-12 13:58:06.747784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.747797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.747809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.747832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.747845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.747864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 13:58:06.747876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.747913 | orchestrator | 2025-07-12 13:58:06.747923 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 13:58:06.747933 | orchestrator | Saturday 12 July 2025 13:55:49 +0000 
(0:00:02.417) 0:00:25.408 ********* 2025-07-12 13:58:06.747943 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.747953 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:06.747962 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:06.747972 | orchestrator | 2025-07-12 13:58:06.747989 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-07-12 13:58:06.747999 | orchestrator | Saturday 12 July 2025 13:55:49 +0000 (0:00:00.312) 0:00:25.720 ********* 2025-07-12 13:58:06.748008 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 13:58:06.748018 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 13:58:06.748028 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 13:58:06.748037 | orchestrator | 2025-07-12 13:58:06.748047 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-07-12 13:58:06.748057 | orchestrator | Saturday 12 July 2025 13:55:52 +0000 (0:00:02.265) 0:00:27.986 ********* 2025-07-12 13:58:06.748067 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 13:58:06.748076 | orchestrator | 2025-07-12 13:58:06.748086 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-07-12 13:58:06.748096 | orchestrator | Saturday 12 July 2025 13:55:52 +0000 (0:00:00.926) 0:00:28.913 ********* 2025-07-12 13:58:06.748105 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.748115 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:06.748125 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:06.748134 | orchestrator | 2025-07-12 13:58:06.748144 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-07-12 
13:58:06.748154 | orchestrator | Saturday 12 July 2025 13:55:53 +0000 (0:00:00.498) 0:00:29.411 ********* 2025-07-12 13:58:06.748163 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 13:58:06.748173 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 13:58:06.748183 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 13:58:06.748192 | orchestrator | 2025-07-12 13:58:06.748202 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-07-12 13:58:06.748212 | orchestrator | Saturday 12 July 2025 13:55:54 +0000 (0:00:01.067) 0:00:30.478 ********* 2025-07-12 13:58:06.748221 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:58:06.748231 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:58:06.748241 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:58:06.748251 | orchestrator | 2025-07-12 13:58:06.748265 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-07-12 13:58:06.748275 | orchestrator | Saturday 12 July 2025 13:55:54 +0000 (0:00:00.298) 0:00:30.777 ********* 2025-07-12 13:58:06.748285 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-12 13:58:06.748294 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-12 13:58:06.748324 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-12 13:58:06.748335 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-12 13:58:06.748345 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-12 13:58:06.748355 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-12 13:58:06.748365 | orchestrator | changed: [testbed-node-0] => (item={'src': 
'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-12 13:58:06.748381 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-12 13:58:06.748392 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-12 13:58:06.748401 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-12 13:58:06.748411 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-12 13:58:06.748421 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-12 13:58:06.748430 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-12 13:58:06.748440 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-12 13:58:06.748450 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-12 13:58:06.748459 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 13:58:06.748469 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 13:58:06.748479 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 13:58:06.748489 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 13:58:06.748499 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 13:58:06.748508 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 13:58:06.748518 | orchestrator | 2025-07-12 13:58:06.748528 
| orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-07-12 13:58:06.748538 | orchestrator | Saturday 12 July 2025 13:56:03 +0000 (0:00:08.946) 0:00:39.723 ********* 2025-07-12 13:58:06.748547 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 13:58:06.748562 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 13:58:06.748572 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 13:58:06.748581 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 13:58:06.748591 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 13:58:06.748601 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 13:58:06.748610 | orchestrator | 2025-07-12 13:58:06.748620 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-07-12 13:58:06.748630 | orchestrator | Saturday 12 July 2025 13:56:06 +0000 (0:00:02.604) 0:00:42.328 ********* 2025-07-12 13:58:06.748646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.748663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.748675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 13:58:06.748690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.748701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.748711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 13:58:06.748727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.748744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.748754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 13:58:06.748764 | orchestrator | 2025-07-12 13:58:06.748774 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 13:58:06.748784 | orchestrator | Saturday 12 July 2025 13:56:08 +0000 (0:00:02.345) 0:00:44.674 ********* 2025-07-12 13:58:06.748794 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.748804 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:06.748814 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:06.748824 | orchestrator | 2025-07-12 13:58:06.748833 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-07-12 13:58:06.748843 | orchestrator | Saturday 12 July 2025 13:56:09 +0000 (0:00:00.290) 0:00:44.964 ********* 2025-07-12 13:58:06.748853 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:58:06.748862 | orchestrator | 2025-07-12 13:58:06.748872 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-07-12 13:58:06.748882 | orchestrator | Saturday 12 July 2025 13:56:11 +0000 (0:00:02.306) 0:00:47.271 ********* 2025-07-12 13:58:06.748891 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:58:06.748901 | orchestrator | 2025-07-12 13:58:06.748911 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-07-12 13:58:06.748921 | orchestrator | Saturday 12 July 2025 13:56:13 +0000 (0:00:02.621) 0:00:49.893 ********* 2025-07-12 13:58:06.748930 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:58:06.748940 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:58:06.748954 | orchestrator | ok: 
[testbed-node-0] 2025-07-12 13:58:06.748964 | orchestrator | 2025-07-12 13:58:06.748974 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-07-12 13:58:06.748984 | orchestrator | Saturday 12 July 2025 13:56:14 +0000 (0:00:00.941) 0:00:50.835 ********* 2025-07-12 13:58:06.748994 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:58:06.749003 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:58:06.749013 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:58:06.749023 | orchestrator | 2025-07-12 13:58:06.749033 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-07-12 13:58:06.749042 | orchestrator | Saturday 12 July 2025 13:56:15 +0000 (0:00:00.339) 0:00:51.174 ********* 2025-07-12 13:58:06.749052 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.749068 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:06.749077 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:06.749087 | orchestrator | 2025-07-12 13:58:06.749097 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-07-12 13:58:06.749107 | orchestrator | Saturday 12 July 2025 13:56:15 +0000 (0:00:00.332) 0:00:51.507 ********* 2025-07-12 13:58:06.749116 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:58:06.749126 | orchestrator | 2025-07-12 13:58:06.749136 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-07-12 13:58:06.749146 | orchestrator | Saturday 12 July 2025 13:56:29 +0000 (0:00:14.029) 0:01:05.537 ********* 2025-07-12 13:58:06.749156 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:58:06.749165 | orchestrator | 2025-07-12 13:58:06.749175 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-12 13:58:06.749185 | orchestrator | Saturday 12 July 2025 13:56:39 +0000 (0:00:10.009) 0:01:15.547 ********* 
2025-07-12 13:58:06.749195 | orchestrator | 2025-07-12 13:58:06.749204 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-12 13:58:06.749214 | orchestrator | Saturday 12 July 2025 13:56:39 +0000 (0:00:00.266) 0:01:15.813 ********* 2025-07-12 13:58:06.749224 | orchestrator | 2025-07-12 13:58:06.749233 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-12 13:58:06.749243 | orchestrator | Saturday 12 July 2025 13:56:39 +0000 (0:00:00.065) 0:01:15.878 ********* 2025-07-12 13:58:06.749253 | orchestrator | 2025-07-12 13:58:06.749263 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-07-12 13:58:06.749276 | orchestrator | Saturday 12 July 2025 13:56:40 +0000 (0:00:00.062) 0:01:15.941 ********* 2025-07-12 13:58:06.749286 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:58:06.749296 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:58:06.749326 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:58:06.749344 | orchestrator | 2025-07-12 13:58:06.749358 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-07-12 13:58:06.749368 | orchestrator | Saturday 12 July 2025 13:56:56 +0000 (0:00:16.825) 0:01:32.766 ********* 2025-07-12 13:58:06.749378 | orchestrator | changed: [testbed-node-1] 2025-07-12 13:58:06.749388 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:58:06.749397 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:58:06.749407 | orchestrator | 2025-07-12 13:58:06.749417 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-07-12 13:58:06.749427 | orchestrator | Saturday 12 July 2025 13:57:07 +0000 (0:00:10.315) 0:01:43.081 ********* 2025-07-12 13:58:06.749436 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:58:06.749446 | orchestrator | changed: [testbed-node-1] 
2025-07-12 13:58:06.749456 | orchestrator | changed: [testbed-node-2] 2025-07-12 13:58:06.749465 | orchestrator | 2025-07-12 13:58:06.749475 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 13:58:06.749485 | orchestrator | Saturday 12 July 2025 13:57:18 +0000 (0:00:11.441) 0:01:54.523 ********* 2025-07-12 13:58:06.749495 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 13:58:06.749505 | orchestrator | 2025-07-12 13:58:06.749514 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-07-12 13:58:06.749524 | orchestrator | Saturday 12 July 2025 13:57:19 +0000 (0:00:00.782) 0:01:55.306 ********* 2025-07-12 13:58:06.749534 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:58:06.749543 | orchestrator | ok: [testbed-node-1] 2025-07-12 13:58:06.749553 | orchestrator | ok: [testbed-node-2] 2025-07-12 13:58:06.749563 | orchestrator | 2025-07-12 13:58:06.749573 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-07-12 13:58:06.749582 | orchestrator | Saturday 12 July 2025 13:57:20 +0000 (0:00:00.757) 0:01:56.064 ********* 2025-07-12 13:58:06.749592 | orchestrator | changed: [testbed-node-0] 2025-07-12 13:58:06.749602 | orchestrator | 2025-07-12 13:58:06.749612 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-07-12 13:58:06.749628 | orchestrator | Saturday 12 July 2025 13:57:21 +0000 (0:00:01.849) 0:01:57.913 ********* 2025-07-12 13:58:06.749638 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-07-12 13:58:06.749647 | orchestrator | 2025-07-12 13:58:06.749657 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-07-12 13:58:06.749667 | orchestrator | Saturday 12 July 2025 13:57:32 +0000 (0:00:10.609) 
0:02:08.522 ********* 2025-07-12 13:58:06.749676 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-07-12 13:58:06.749686 | orchestrator | 2025-07-12 13:58:06.749696 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-07-12 13:58:06.749706 | orchestrator | Saturday 12 July 2025 13:57:54 +0000 (0:00:21.740) 0:02:30.263 ********* 2025-07-12 13:58:06.749715 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-07-12 13:58:06.749725 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-07-12 13:58:06.749735 | orchestrator | 2025-07-12 13:58:06.749744 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-07-12 13:58:06.749754 | orchestrator | Saturday 12 July 2025 13:58:01 +0000 (0:00:06.805) 0:02:37.069 ********* 2025-07-12 13:58:06.749764 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.749773 | orchestrator | 2025-07-12 13:58:06.749788 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-07-12 13:58:06.749798 | orchestrator | Saturday 12 July 2025 13:58:01 +0000 (0:00:00.371) 0:02:37.440 ********* 2025-07-12 13:58:06.749808 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.749818 | orchestrator | 2025-07-12 13:58:06.749827 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-07-12 13:58:06.749837 | orchestrator | Saturday 12 July 2025 13:58:01 +0000 (0:00:00.134) 0:02:37.574 ********* 2025-07-12 13:58:06.749847 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.749856 | orchestrator | 2025-07-12 13:58:06.749866 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-07-12 13:58:06.749876 | orchestrator | Saturday 12 July 2025 13:58:01 +0000 
(0:00:00.129) 0:02:37.704 ********* 2025-07-12 13:58:06.749885 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.749895 | orchestrator | 2025-07-12 13:58:06.749905 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-07-12 13:58:06.749915 | orchestrator | Saturday 12 July 2025 13:58:02 +0000 (0:00:00.358) 0:02:38.062 ********* 2025-07-12 13:58:06.749924 | orchestrator | ok: [testbed-node-0] 2025-07-12 13:58:06.749934 | orchestrator | 2025-07-12 13:58:06.749944 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 13:58:06.749953 | orchestrator | Saturday 12 July 2025 13:58:05 +0000 (0:00:03.033) 0:02:41.096 ********* 2025-07-12 13:58:06.749963 | orchestrator | skipping: [testbed-node-0] 2025-07-12 13:58:06.749973 | orchestrator | skipping: [testbed-node-1] 2025-07-12 13:58:06.749982 | orchestrator | skipping: [testbed-node-2] 2025-07-12 13:58:06.749992 | orchestrator | 2025-07-12 13:58:06.750002 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 13:58:06.750013 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-07-12 13:58:06.750051 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-12 13:58:06.750067 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-12 13:58:06.750077 | orchestrator | 2025-07-12 13:58:06.750087 | orchestrator | 2025-07-12 13:58:06.750097 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 13:58:06.750107 | orchestrator | Saturday 12 July 2025 13:58:05 +0000 (0:00:00.573) 0:02:41.670 ********* 2025-07-12 13:58:06.750122 | orchestrator | =============================================================================== 2025-07-12 
13:58:06.750132 | orchestrator | service-ks-register : keystone | Creating services --------------------- 21.74s 2025-07-12 13:58:06.750142 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 16.83s 2025-07-12 13:58:06.750151 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.03s 2025-07-12 13:58:06.750161 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.44s 2025-07-12 13:58:06.750171 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.61s 2025-07-12 13:58:06.750181 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.32s 2025-07-12 13:58:06.750190 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.01s 2025-07-12 13:58:06.750200 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.95s 2025-07-12 13:58:06.750210 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.81s 2025-07-12 13:58:06.750219 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.36s 2025-07-12 13:58:06.750229 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.71s 2025-07-12 13:58:06.750239 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.63s 2025-07-12 13:58:06.750249 | orchestrator | keystone : Creating default user role ----------------------------------- 3.03s 2025-07-12 13:58:06.750258 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.62s 2025-07-12 13:58:06.750268 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.61s 2025-07-12 13:58:06.750278 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.42s 2025-07-12 13:58:06.750287 
| orchestrator | keystone : Check keystone containers ------------------------------------ 2.35s 2025-07-12 13:58:06.750297 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.31s 2025-07-12 13:58:06.750367 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.27s 2025-07-12 13:58:06.750378 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.85s 2025-07-12 13:58:06.750388 | orchestrator | 2025-07-12 13:58:06 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state STARTED 2025-07-12 13:58:06.750398 | orchestrator | 2025-07-12 13:58:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:09.781647 | orchestrator | 2025-07-12 13:58:09 | INFO  | Task eddda4cb-7922-4f01-abd3-5c3f59653b0d is in state STARTED 2025-07-12 13:58:09.781746 | orchestrator | 2025-07-12 13:58:09 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 13:58:09.781761 | orchestrator | 2025-07-12 13:58:09 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 13:58:09.782223 | orchestrator | 2025-07-12 13:58:09 | INFO  | Task 522e4b50-290e-449a-8c36-24deb330e7a6 is in state SUCCESS 2025-07-12 13:58:09.782864 | orchestrator | 2025-07-12 13:58:09 | INFO  | Task 308e50f1-d47b-4fef-ad7c-4b9d823acc83 is in state STARTED 2025-07-12 13:58:09.782891 | orchestrator | 2025-07-12 13:58:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 13:58:52.524720 | orchestrator | 2025-07-12 13:58:52 | INFO  | Task
eddda4cb-7922-4f01-abd3-5c3f59653b0d is in state STARTED
2025-07-12 13:58:52.524822 | orchestrator | 2025-07-12 13:58:52 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED
2025-07-12 13:58:52.524836 | orchestrator | 2025-07-12 13:58:52 | INFO  | Task 902aefb8-39bf-4037-b837-2c45c95233f4 is in state STARTED
2025-07-12 13:58:52.524848 | orchestrator | 2025-07-12 13:58:52 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 13:58:52.524872 | orchestrator | 2025-07-12 13:58:52 | INFO  | Task 308e50f1-d47b-4fef-ad7c-4b9d823acc83 is in state SUCCESS
2025-07-12 13:58:52.525331 | orchestrator |
2025-07-12 13:58:52.525359 | orchestrator |
2025-07-12 13:58:52.525371 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-07-12 13:58:52.525383 | orchestrator |
2025-07-12 13:58:52.525394 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-07-12 13:58:52.525406 | orchestrator | Saturday 12 July 2025 13:57:18 +0000 (0:00:00.257) 0:00:00.257 *********
2025-07-12 13:58:52.525418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-07-12 13:58:52.525431 | orchestrator |
2025-07-12 13:58:52.525443 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-07-12 13:58:52.525454 | orchestrator | Saturday 12 July 2025 13:57:18 +0000 (0:00:00.219) 0:00:00.477 *********
2025-07-12 13:58:52.525493 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-07-12 13:58:52.525505 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-07-12 13:58:52.525516 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-07-12 13:58:52.525528 | orchestrator |
2025-07-12 13:58:52.525539 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-07-12 13:58:52.525550 | orchestrator | Saturday 12 July 2025 13:57:19 +0000 (0:00:01.211) 0:00:01.688 *********
2025-07-12 13:58:52.525561 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-07-12 13:58:52.525572 | orchestrator |
2025-07-12 13:58:52.525583 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-07-12 13:58:52.525594 | orchestrator | Saturday 12 July 2025 13:57:20 +0000 (0:00:01.147) 0:00:02.836 *********
2025-07-12 13:58:52.525605 | orchestrator | changed: [testbed-manager]
2025-07-12 13:58:52.525616 | orchestrator |
2025-07-12 13:58:52.525627 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-07-12 13:58:52.525638 | orchestrator | Saturday 12 July 2025 13:57:21 +0000 (0:00:01.013) 0:00:03.849 *********
2025-07-12 13:58:52.525649 | orchestrator | changed: [testbed-manager]
2025-07-12 13:58:52.525660 | orchestrator |
2025-07-12 13:58:52.525671 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-07-12 13:58:52.525682 | orchestrator | Saturday 12 July 2025 13:57:22 +0000 (0:00:00.915) 0:00:04.765 *********
2025-07-12 13:58:52.525693 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-07-12 13:58:52.525704 | orchestrator | ok: [testbed-manager]
2025-07-12 13:58:52.525715 | orchestrator |
2025-07-12 13:58:52.525726 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-07-12 13:58:52.525737 | orchestrator | Saturday 12 July 2025 13:58:00 +0000 (0:00:37.571) 0:00:42.336 *********
2025-07-12 13:58:52.525748 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-07-12 13:58:52.525759 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-07-12 13:58:52.525770 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-07-12 13:58:52.525781 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-07-12 13:58:52.525792 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-07-12 13:58:52.525802 | orchestrator |
2025-07-12 13:58:52.525813 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-07-12 13:58:52.525824 | orchestrator | Saturday 12 July 2025 13:58:04 +0000 (0:00:04.060) 0:00:46.397 *********
2025-07-12 13:58:52.525850 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-07-12 13:58:52.525861 | orchestrator |
2025-07-12 13:58:52.525872 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-07-12 13:58:52.525884 | orchestrator | Saturday 12 July 2025 13:58:04 +0000 (0:00:00.477) 0:00:46.875 *********
2025-07-12 13:58:52.525896 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:58:52.525909 | orchestrator |
2025-07-12 13:58:52.525921 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-07-12 13:58:52.525933 | orchestrator | Saturday 12 July 2025 13:58:05 +0000 (0:00:00.124) 0:00:46.999 *********
2025-07-12 13:58:52.525945 | orchestrator | skipping: [testbed-manager]
2025-07-12 13:58:52.525957 | orchestrator |
2025-07-12 13:58:52.525970 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-07-12 13:58:52.525982 | orchestrator | Saturday 12 July 2025 13:58:05 +0000 (0:00:00.274) 0:00:47.274 *********
2025-07-12 13:58:52.525994 | orchestrator | changed: [testbed-manager]
2025-07-12 13:58:52.526006 | orchestrator |
2025-07-12 13:58:52.526074 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-07-12 13:58:52.526087 | orchestrator | Saturday 12 July 2025 13:58:06 +0000 (0:00:01.544) 0:00:48.819 *********
2025-07-12 13:58:52.526101 | orchestrator | changed: [testbed-manager]
2025-07-12 13:58:52.526120 | orchestrator |
2025-07-12 13:58:52.526139 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-07-12 13:58:52.526169 | orchestrator | Saturday 12 July 2025 13:58:07 +0000 (0:00:00.728) 0:00:49.548 *********
2025-07-12 13:58:52.526188 | orchestrator | changed: [testbed-manager]
2025-07-12 13:58:52.526205 | orchestrator |
2025-07-12 13:58:52.526223 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-07-12 13:58:52.526241 | orchestrator | Saturday 12 July 2025 13:58:08 +0000 (0:00:00.574) 0:00:50.123 *********
2025-07-12 13:58:52.526260 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-07-12 13:58:52.526278 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-07-12 13:58:52.526324 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-07-12 13:58:52.526342 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-07-12 13:58:52.526360 | orchestrator |
2025-07-12 13:58:52.526377 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:58:52.526395 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 13:58:52.526414 | orchestrator |
2025-07-12 13:58:52.526432 | orchestrator |
2025-07-12 13:58:52.526470 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:58:52.526492 | orchestrator | Saturday 12 July 2025 13:58:09 +0000 (0:00:01.239) 0:00:51.362 *********
2025-07-12 13:58:52.526512 | orchestrator | ===============================================================================
2025-07-12 13:58:52.526532 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.57s
2025-07-12 13:58:52.526550 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.06s
2025-07-12 13:58:52.526568 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.54s
2025-07-12 13:58:52.526586 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.24s
2025-07-12 13:58:52.526605 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.21s
2025-07-12 13:58:52.526622 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s
2025-07-12 13:58:52.526640 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.01s
2025-07-12 13:58:52.526658 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.92s
2025-07-12 13:58:52.526676 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.73s
2025-07-12 13:58:52.526696 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s
2025-07-12 13:58:52.526713 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s
2025-07-12 13:58:52.526731 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.27s
2025-07-12 13:58:52.526749 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2025-07-12 13:58:52.526768 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2025-07-12 13:58:52.526786 | orchestrator |
2025-07-12 13:58:52.526806 | orchestrator |
2025-07-12 13:58:52.526825 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-07-12 13:58:52.526845 | orchestrator |
2025-07-12 13:58:52.526864 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-07-12 13:58:52.526882 | orchestrator | Saturday 12 July 2025 13:58:10 +0000 (0:00:00.089) 0:00:00.089 *********
2025-07-12 13:58:52.526900 | orchestrator | changed: [localhost]
2025-07-12 13:58:52.526919 | orchestrator |
2025-07-12 13:58:52.526939 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-07-12 13:58:52.526958 | orchestrator | Saturday 12 July 2025 13:58:12 +0000 (0:00:01.181) 0:00:01.271 *********
2025-07-12 13:58:52.526976 | orchestrator | changed: [localhost]
2025-07-12 13:58:52.526995 | orchestrator |
2025-07-12 13:58:52.527007 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-07-12 13:58:52.527018 | orchestrator | Saturday 12 July 2025 13:58:44 +0000 (0:00:32.877) 0:00:34.148 *********
2025-07-12 13:58:52.527043 | orchestrator | changed: [localhost]
2025-07-12 13:58:52.527054 | orchestrator |
2025-07-12 13:58:52.527065 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 13:58:52.527076 | orchestrator |
2025-07-12 13:58:52.527087 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 13:58:52.527098 | orchestrator | Saturday 12 July 2025 13:58:50 +0000 (0:00:05.959) 0:00:40.108 *********
2025-07-12 13:58:52.527109 | orchestrator | ok: [testbed-node-0]
2025-07-12 13:58:52.527120 | orchestrator | ok: [testbed-node-1]
2025-07-12 13:58:52.527131 | orchestrator | ok: [testbed-node-2]
2025-07-12 13:58:52.527142 | orchestrator |
2025-07-12 13:58:52.527153 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 13:58:52.527174 | orchestrator | Saturday 12 July 2025 13:58:51 +0000 (0:00:00.366) 0:00:40.474 *********
2025-07-12 13:58:52.527185 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-07-12 13:58:52.527197 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-07-12 13:58:52.527208 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-07-12 13:58:52.527218 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-07-12 13:58:52.527229 | orchestrator |
2025-07-12 13:58:52.527240 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-07-12 13:58:52.527251 | orchestrator | skipping: no hosts matched
2025-07-12 13:58:52.527262 | orchestrator |
2025-07-12 13:58:52.527273 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 13:58:52.527316 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:58:52.527331 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:58:52.527343 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:58:52.527355 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 13:58:52.527366 | orchestrator |
2025-07-12 13:58:52.527377 | orchestrator |
2025-07-12 13:58:52.527388 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 13:58:52.527398 | orchestrator | Saturday 12 July 2025 13:58:51 +0000 (0:00:00.666) 0:00:41.141 *********
2025-07-12 13:58:52.527409 | orchestrator | ===============================================================================
2025-07-12 13:58:52.527420 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 32.88s
2025-07-12 13:58:52.527431 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.96s
2025-07-12 13:58:52.527442 | orchestrator | Ensure the destination directory exists --------------------------------- 1.18s
2025-07-12 13:58:52.527453 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s
2025-07-12 13:58:52.527476 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2025-07-12 13:58:52.527488 | orchestrator | 2025-07-12 13:58:52 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:58:55.584420 | orchestrator | 2025-07-12 13:58:55 | INFO  | Task eddda4cb-7922-4f01-abd3-5c3f59653b0d is in state STARTED
2025-07-12 13:58:55.584533 | orchestrator | 2025-07-12 13:58:55 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED
2025-07-12 13:58:55.584885 | orchestrator | 2025-07-12 13:58:55 | INFO  | Task 902aefb8-39bf-4037-b837-2c45c95233f4 is in state STARTED
2025-07-12 13:58:55.588005 | orchestrator | 2025-07-12 13:58:55 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 13:58:55.588465 | orchestrator | 2025-07-12 13:58:55 | INFO  | Task 2f2b0967-e62a-48a0-8b35-3bd94b3a5558 is in state STARTED
2025-07-12 13:58:55.588512 | orchestrator | 2025-07-12 13:58:55 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:58:58.623272 | orchestrator | 2025-07-12 13:58:58 | INFO  | Task eddda4cb-7922-4f01-abd3-5c3f59653b0d is in state STARTED
2025-07-12 13:58:58.623543 | orchestrator | 2025-07-12 13:58:58 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED
2025-07-12 13:58:58.624018 | orchestrator | 2025-07-12 13:58:58 | INFO  | Task
902aefb8-39bf-4037-b837-2c45c95233f4 is in state STARTED
2025-07-12 13:58:58.627552 | orchestrator | 2025-07-12 13:58:58 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 13:58:58.627663 | orchestrator | 2025-07-12 13:58:58 | INFO  | Task 2f2b0967-e62a-48a0-8b35-3bd94b3a5558 is in state STARTED
2025-07-12 13:58:58.627679 | orchestrator | 2025-07-12 13:58:58 | INFO  | Wait 1
second(s) until the next check
2025-07-12 13:59:38.030313 | orchestrator | 2025-07-12 13:59:38 | INFO  | Task eddda4cb-7922-4f01-abd3-5c3f59653b0d is in state STARTED
2025-07-12 13:59:38.030814 | orchestrator | 2025-07-12 13:59:38 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED
2025-07-12 13:59:38.031254 | orchestrator | 2025-07-12 13:59:38 | INFO  | Task 902aefb8-39bf-4037-b837-2c45c95233f4 is in state SUCCESS
2025-07-12 13:59:38.031962 | orchestrator | 2025-07-12 13:59:38 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 13:59:38.033930 | orchestrator | 2025-07-12 13:59:38 | INFO  | Task 2f2b0967-e62a-48a0-8b35-3bd94b3a5558 is in state STARTED
2025-07-12 13:59:38.033981 | orchestrator | 2025-07-12 13:59:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:59:41.064524 | orchestrator | 2025-07-12 13:59:41 | INFO  | Task eddda4cb-7922-4f01-abd3-5c3f59653b0d is in state STARTED
2025-07-12 13:59:41.064632 | orchestrator | 2025-07-12 13:59:41 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED
2025-07-12 13:59:41.065347 | orchestrator | 2025-07-12 13:59:41 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 13:59:41.066010 | orchestrator | 2025-07-12 13:59:41 | INFO  | Task 2f2b0967-e62a-48a0-8b35-3bd94b3a5558 is in state STARTED
2025-07-12 13:59:41.066100 | orchestrator | 2025-07-12 13:59:41 | INFO  | Wait 1 second(s) until the next check
2025-07-12 13:59:44.094681 | orchestrator | 2025-07-12 13:59:44 | INFO  | Task eddda4cb-7922-4f01-abd3-5c3f59653b0d is in state STARTED
2025-07-12 13:59:44.094780 | orchestrator | 2025-07-12 13:59:44 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED
2025-07-12 13:59:44.095222 | orchestrator | 2025-07-12 13:59:44 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 13:59:44.095798 | orchestrator | 2025-07-12 13:59:44 | INFO  | Task
2f2b0967-e62a-48a0-8b35-3bd94b3a5558 is in state STARTED
2025-07-12 13:59:44.096443 | orchestrator | 2025-07-12 13:59:44 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:00:02.325673 | orchestrator | 2025-07-12 14:00:02 | INFO  | Task eddda4cb-7922-4f01-abd3-5c3f59653b0d is in state STARTED
2025-07-12 14:00:02.327041 | orchestrator | 2025-07-12 14:00:02 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED
2025-07-12 14:00:02.327075 | orchestrator | 2025-07-12 14:00:02 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:00:02.327764 | orchestrator | 2025-07-12 14:00:02 | INFO  | Task
2f2b0967-e62a-48a0-8b35-3bd94b3a5558 is in state STARTED 2025-07-12 14:00:02.327872 | orchestrator | 2025-07-12 14:00:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:00:05.365129 | orchestrator | 2025-07-12 14:00:05 | INFO  | Task eddda4cb-7922-4f01-abd3-5c3f59653b0d is in state SUCCESS 2025-07-12 14:00:05.366116 | orchestrator | 2025-07-12 14:00:05.366146 | orchestrator | 2025-07-12 14:00:05.366156 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-07-12 14:00:05.366164 | orchestrator | 2025-07-12 14:00:05.366171 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-07-12 14:00:05.366179 | orchestrator | Saturday 12 July 2025 13:58:13 +0000 (0:00:00.202) 0:00:00.202 ********* 2025-07-12 14:00:05.366187 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:05.366195 | orchestrator | 2025-07-12 14:00:05.366203 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-07-12 14:00:05.366210 | orchestrator | Saturday 12 July 2025 13:58:14 +0000 (0:00:01.632) 0:00:01.834 ********* 2025-07-12 14:00:05.366300 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:05.366308 | orchestrator | 2025-07-12 14:00:05.366316 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-07-12 14:00:05.366323 | orchestrator | Saturday 12 July 2025 13:58:15 +0000 (0:00:00.904) 0:00:02.739 ********* 2025-07-12 14:00:05.366331 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:05.366338 | orchestrator | 2025-07-12 14:00:05.366345 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-07-12 14:00:05.366353 | orchestrator | Saturday 12 July 2025 13:58:16 +0000 (0:00:00.892) 0:00:03.631 ********* 2025-07-12 14:00:05.366360 | orchestrator | changed: [testbed-manager] 2025-07-12 14:00:05.366367 | orchestrator | 
2025-07-12 14:00:05.366375 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-07-12 14:00:05.366382 | orchestrator | Saturday 12 July 2025 13:58:17 +0000 (0:00:01.080) 0:00:04.711 *********
2025-07-12 14:00:05.366389 | orchestrator | changed: [testbed-manager]
2025-07-12 14:00:05.366396 | orchestrator |
2025-07-12 14:00:05.366404 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-07-12 14:00:05.366411 | orchestrator | Saturday 12 July 2025 13:58:18 +0000 (0:00:01.059) 0:00:05.770 *********
2025-07-12 14:00:05.366419 | orchestrator | changed: [testbed-manager]
2025-07-12 14:00:05.366426 | orchestrator |
2025-07-12 14:00:05.366433 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-07-12 14:00:05.366462 | orchestrator | Saturday 12 July 2025 13:58:19 +0000 (0:00:01.091) 0:00:06.862 *********
2025-07-12 14:00:05.366469 | orchestrator | changed: [testbed-manager]
2025-07-12 14:00:05.366476 | orchestrator |
2025-07-12 14:00:05.366484 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-07-12 14:00:05.366803 | orchestrator | Saturday 12 July 2025 13:58:21 +0000 (0:00:02.062) 0:00:08.925 *********
2025-07-12 14:00:05.366814 | orchestrator | changed: [testbed-manager]
2025-07-12 14:00:05.366821 | orchestrator |
2025-07-12 14:00:05.366829 | orchestrator | TASK [Create admin user] *******************************************************
2025-07-12 14:00:05.366836 | orchestrator | Saturday 12 July 2025 13:58:23 +0000 (0:00:01.172) 0:00:10.098 *********
2025-07-12 14:00:05.366844 | orchestrator | changed: [testbed-manager]
2025-07-12 14:00:05.366851 | orchestrator |
2025-07-12 14:00:05.366858 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-07-12 14:00:05.366866 | orchestrator | Saturday 12 July 2025 13:59:13 +0000 (0:00:50.244) 0:01:00.343 *********
2025-07-12 14:00:05.366873 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:00:05.366932 | orchestrator |
2025-07-12 14:00:05.366943 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 14:00:05.366950 | orchestrator |
2025-07-12 14:00:05.366958 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 14:00:05.366965 | orchestrator | Saturday 12 July 2025 13:59:13 +0000 (0:00:00.136) 0:01:00.479 *********
2025-07-12 14:00:05.366973 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:00:05.366980 | orchestrator |
2025-07-12 14:00:05.366987 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 14:00:05.366995 | orchestrator |
2025-07-12 14:00:05.367002 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 14:00:05.367010 | orchestrator | Saturday 12 July 2025 13:59:25 +0000 (0:00:11.584) 0:01:12.064 *********
2025-07-12 14:00:05.367017 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:00:05.367024 | orchestrator |
2025-07-12 14:00:05.367032 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 14:00:05.367040 | orchestrator |
2025-07-12 14:00:05.367048 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 14:00:05.367055 | orchestrator | Saturday 12 July 2025 13:59:26 +0000 (0:00:01.083) 0:01:13.148 *********
2025-07-12 14:00:05.367062 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:00:05.367070 | orchestrator |
2025-07-12 14:00:05.367077 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:00:05.367086 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 14:00:05.367094 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:00:05.367115 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:00:05.367123 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:00:05.367131 | orchestrator |
2025-07-12 14:00:05.367138 | orchestrator |
2025-07-12 14:00:05.367145 | orchestrator |
2025-07-12 14:00:05.367153 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:00:05.367160 | orchestrator | Saturday 12 July 2025 13:59:37 +0000 (0:00:11.181) 0:01:24.329 *********
2025-07-12 14:00:05.367168 | orchestrator | ===============================================================================
2025-07-12 14:00:05.367180 | orchestrator | Create admin user ------------------------------------------------------ 50.25s
2025-07-12 14:00:05.367193 | orchestrator | Restart ceph manager service ------------------------------------------- 23.85s
2025-07-12 14:00:05.367242 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.06s
2025-07-12 14:00:05.367284 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.63s
2025-07-12 14:00:05.367291 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.17s
2025-07-12 14:00:05.367298 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.09s
2025-07-12 14:00:05.367306 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.08s
2025-07-12 14:00:05.367313 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s
2025-07-12 14:00:05.367320 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.90s
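The mgr/dashboard settings and the admin-user creation recapped above map onto plain `ceph` CLI calls. A minimal sketch of the equivalent commands, assuming a reachable Ceph cluster; the `admin` username and the temporary-password-file pattern are inferred from the task names, and everything else (path, variable name) is an assumption, since the log does not show the modules' arguments:

```shell
# Hedged sketch of the dashboard tasks above (values assumed, not from the log).
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard            # "Enable the ceph dashboard"
# "Write ceph_dashboard_password to temporary file" + "Create admin user"
printf '%s' "$CEPH_DASHBOARD_PASSWORD" > /tmp/ceph_dashboard_password
ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
rm -f /tmp/ceph_dashboard_password          # cleanup; skipped in this run
```

Note that `ceph dashboard ac-user-create` reads the password from a file (`-i`), which is why the play writes and later removes a temporary file rather than passing the secret on the command line.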
2025-07-12 14:00:05.367327 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.89s
2025-07-12 14:00:05.367335 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s
2025-07-12 14:00:05.367342 | orchestrator |
2025-07-12 14:00:05.367349 | orchestrator |
2025-07-12 14:00:05.367356 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:00:05.367363 | orchestrator |
2025-07-12 14:00:05.367371 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:00:05.367378 | orchestrator | Saturday 12 July 2025 13:58:11 +0000 (0:00:00.468) 0:00:00.468 *********
2025-07-12 14:00:05.367385 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:00:05.367393 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:00:05.367400 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:00:05.367407 | orchestrator |
2025-07-12 14:00:05.367415 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:00:05.367422 | orchestrator | Saturday 12 July 2025 13:58:12 +0000 (0:00:00.386) 0:00:00.854 *********
2025-07-12 14:00:05.367429 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-07-12 14:00:05.367437 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-07-12 14:00:05.367444 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-07-12 14:00:05.367451 | orchestrator |
2025-07-12 14:00:05.367458 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-07-12 14:00:05.367466 | orchestrator |
2025-07-12 14:00:05.367473 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-07-12 14:00:05.367480 | orchestrator | Saturday 12 July 2025 13:58:12 +0000 (0:00:00.524) 0:00:01.378 *********
2025-07-12 14:00:05.367488 |
orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:00:05.367496 | orchestrator |
2025-07-12 14:00:05.367503 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-07-12 14:00:05.367511 | orchestrator | Saturday 12 July 2025 13:58:13 +0000 (0:00:00.497) 0:00:01.875 *********
2025-07-12 14:00:05.367518 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-07-12 14:00:05.367525 | orchestrator |
2025-07-12 14:00:05.367532 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-07-12 14:00:05.367540 | orchestrator | Saturday 12 July 2025 13:58:17 +0000 (0:00:03.807) 0:00:05.683 *********
2025-07-12 14:00:05.367547 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-07-12 14:00:05.367554 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-07-12 14:00:05.367562 | orchestrator |
2025-07-12 14:00:05.367569 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-07-12 14:00:05.367576 | orchestrator | Saturday 12 July 2025 13:58:23 +0000 (0:00:06.444) 0:00:12.128 *********
2025-07-12 14:00:05.367585 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 14:00:05.367593 | orchestrator |
2025-07-12 14:00:05.367602 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-07-12 14:00:05.367611 | orchestrator | Saturday 12 July 2025 13:58:26 +0000 (0:00:03.196) 0:00:15.325 *********
2025-07-12 14:00:05.367619 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 14:00:05.367633 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-07-12 14:00:05.367641 | orchestrator |
2025-07-12 14:00:05.367650 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-07-12 14:00:05.367658 | orchestrator | Saturday 12 July 2025 13:58:30 +0000 (0:00:03.663) 0:00:18.988 *********
2025-07-12 14:00:05.367667 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 14:00:05.367676 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-07-12 14:00:05.367684 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-07-12 14:00:05.367693 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-07-12 14:00:05.367702 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-07-12 14:00:05.367710 | orchestrator |
2025-07-12 14:00:05.367718 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-07-12 14:00:05.367727 | orchestrator | Saturday 12 July 2025 13:58:45 +0000 (0:00:14.723) 0:00:33.712 *********
2025-07-12 14:00:05.367740 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-07-12 14:00:05.367749 | orchestrator |
2025-07-12 14:00:05.367758 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-07-12 14:00:05.367766 | orchestrator | Saturday 12 July 2025 13:58:49 +0000 (0:00:03.956) 0:00:37.668 *********
2025-07-12 14:00:05.367787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy':
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.367800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.367810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.367825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.367840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.367856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.367865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.367934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.367944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:00:05.367952 | orchestrator |
2025-07-12 14:00:05.367959 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-07-12 14:00:05.367973 | orchestrator | Saturday 12 July 2025 13:58:51 +0000 (0:00:02.154) 0:00:39.823 *********
2025-07-12 14:00:05.367980 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-07-12 14:00:05.367988 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-07-12 14:00:05.367995 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-07-12 14:00:05.368002 | orchestrator |
2025-07-12 14:00:05.368010 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-07-12 14:00:05.368017 | orchestrator | Saturday 12 July 2025 13:58:52 +0000 (0:00:01.094) 0:00:40.917 *********
2025-07-12 14:00:05.368024 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:00:05.368031 | orchestrator |
2025-07-12 14:00:05.368039 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-07-12 14:00:05.368046 | orchestrator | Saturday 12 July 2025 13:58:52 +0000 (0:00:00.278) 0:00:41.196 *********
2025-07-12 14:00:05.368053 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:00:05.368061 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:00:05.368068 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:00:05.368075 | orchestrator |
2025-07-12 14:00:05.368082 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-07-12 14:00:05.368090 | orchestrator | Saturday 12 July 2025 13:58:53 +0000 (0:00:00.844) 0:00:42.040 *********
2025-07-12 14:00:05.368097 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for
testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:00:05.368104 | orchestrator | 2025-07-12 14:00:05.368112 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-07-12 14:00:05.368119 | orchestrator | Saturday 12 July 2025 13:58:54 +0000 (0:00:00.997) 0:00:43.038 ********* 2025-07-12 14:00:05.368138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.368147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.368155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.368168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368175 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368291 | orchestrator | 2025-07-12 14:00:05.368306 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-07-12 14:00:05.368319 | orchestrator | Saturday 12 July 2025 13:58:57 +0000 (0:00:03.303) 0:00:46.342 ********* 2025-07-12 14:00:05.368327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:05.368335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368363 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:05.368377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:05.368386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368407 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:05.368414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:05.368426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368447 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:05.368455 | orchestrator | 2025-07-12 14:00:05.368462 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-07-12 14:00:05.368470 | orchestrator | Saturday 12 July 2025 13:58:59 +0000 (0:00:02.039) 0:00:48.381 ********* 2025-07-12 14:00:05.368478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:05.368490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368506 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:05.368513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:05.368529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368549 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:05.368556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:05.368564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.368609 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:05.368619 | orchestrator | 2025-07-12 14:00:05.368626 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 
2025-07-12 14:00:05.368634 | orchestrator | Saturday 12 July 2025 13:59:00 +0000 (0:00:01.208) 0:00:49.589 ********* 2025-07-12 14:00:05.368645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.368658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.368671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.368679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.368739 | orchestrator | 2025-07-12 14:00:05.368772 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-07-12 14:00:05.368782 | orchestrator | Saturday 12 July 2025 13:59:04 +0000 (0:00:03.148) 0:00:52.738 ********* 2025-07-12 14:00:05.368791 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:00:05.368800 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:00:05.368809 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:05.368818 | orchestrator | 2025-07-12 14:00:05.368826 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-07-12 14:00:05.368835 | orchestrator | Saturday 12 July 2025 13:59:06 +0000 (0:00:02.064) 0:00:54.802 ********* 2025-07-12 14:00:05.368844 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 14:00:05.368853 | orchestrator | 2025-07-12 14:00:05.368862 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 
2025-07-12 14:00:05.369018 | orchestrator | Saturday 12 July 2025 13:59:07 +0000 (0:00:01.170) 0:00:55.972 ********* 2025-07-12 14:00:05.369031 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:05.369041 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:05.369049 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:05.369058 | orchestrator | 2025-07-12 14:00:05.369067 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-07-12 14:00:05.369075 | orchestrator | Saturday 12 July 2025 13:59:08 +0000 (0:00:00.636) 0:00:56.609 ********* 2025-07-12 14:00:05.369085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.369177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.369219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.369229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369380 | orchestrator | 2025-07-12 14:00:05.369390 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-07-12 14:00:05.369399 | orchestrator | Saturday 12 July 2025 13:59:17 +0000 (0:00:09.025) 0:01:05.635 ********* 2025-07-12 14:00:05.369408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:05.369418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.369427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2025-07-12 14:00:05.369436 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:05.369449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:05.369471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.369481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.369491 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:05.369500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 14:00:05.369509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.369519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:00:05.369532 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:05.369541 | orchestrator | 2025-07-12 14:00:05.369550 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-07-12 14:00:05.369559 | orchestrator | Saturday 12 July 2025 13:59:18 +0000 (0:00:01.462) 0:01:07.097 ********* 2025-07-12 14:00:05.369579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.369589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.369598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:05.369622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369650 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:00:05.369682 | orchestrator | 2025-07-12 14:00:05.369693 | orchestrator | TASK [barbican : include_tasks] 
************************************************ 2025-07-12 14:00:05.369703 | orchestrator | Saturday 12 July 2025 13:59:21 +0000 (0:00:03.295) 0:01:10.393 ********* 2025-07-12 14:00:05.369713 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:05.369723 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:05.369733 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:05.369743 | orchestrator | 2025-07-12 14:00:05.369754 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-07-12 14:00:05.369765 | orchestrator | Saturday 12 July 2025 13:59:22 +0000 (0:00:00.554) 0:01:10.948 ********* 2025-07-12 14:00:05.369775 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:05.369790 | orchestrator | 2025-07-12 14:00:05.369800 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-07-12 14:00:05.369811 | orchestrator | Saturday 12 July 2025 13:59:24 +0000 (0:00:02.236) 0:01:13.184 ********* 2025-07-12 14:00:05.369822 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:05.369832 | orchestrator | 2025-07-12 14:00:05.369842 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-07-12 14:00:05.369852 | orchestrator | Saturday 12 July 2025 13:59:26 +0000 (0:00:01.934) 0:01:15.118 ********* 2025-07-12 14:00:05.369861 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:05.369871 | orchestrator | 2025-07-12 14:00:05.369881 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-12 14:00:05.369892 | orchestrator | Saturday 12 July 2025 13:59:38 +0000 (0:00:11.621) 0:01:26.739 ********* 2025-07-12 14:00:05.369902 | orchestrator | 2025-07-12 14:00:05.369912 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-12 14:00:05.369922 | orchestrator | Saturday 12 July 2025 13:59:38 +0000 (0:00:00.203) 
0:01:26.943 ********* 2025-07-12 14:00:05.369932 | orchestrator | 2025-07-12 14:00:05.369943 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-12 14:00:05.369953 | orchestrator | Saturday 12 July 2025 13:59:38 +0000 (0:00:00.190) 0:01:27.134 ********* 2025-07-12 14:00:05.369963 | orchestrator | 2025-07-12 14:00:05.369973 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-07-12 14:00:05.369983 | orchestrator | Saturday 12 July 2025 13:59:38 +0000 (0:00:00.209) 0:01:27.344 ********* 2025-07-12 14:00:05.369993 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:05.370004 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:00:05.370014 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:00:05.370075 | orchestrator | 2025-07-12 14:00:05.370084 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-07-12 14:00:05.370092 | orchestrator | Saturday 12 July 2025 13:59:46 +0000 (0:00:07.470) 0:01:34.815 ********* 2025-07-12 14:00:05.370106 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:00:05.370115 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:05.370124 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:00:05.370133 | orchestrator | 2025-07-12 14:00:05.370142 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-07-12 14:00:05.370151 | orchestrator | Saturday 12 July 2025 13:59:56 +0000 (0:00:10.249) 0:01:45.067 ********* 2025-07-12 14:00:05.370159 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:05.370168 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:00:05.370177 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:00:05.370186 | orchestrator | 2025-07-12 14:00:05.370195 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:00:05.370204 | 
orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 14:00:05.370219 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 14:00:05.370228 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 14:00:05.370237 | orchestrator | 2025-07-12 14:00:05.370264 | orchestrator | 2025-07-12 14:00:05.370274 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:00:05.370283 | orchestrator | Saturday 12 July 2025 14:00:03 +0000 (0:00:06.733) 0:01:51.800 ********* 2025-07-12 14:00:05.370292 | orchestrator | =============================================================================== 2025-07-12 14:00:05.370301 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.72s 2025-07-12 14:00:05.370310 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.62s 2025-07-12 14:00:05.370318 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.25s 2025-07-12 14:00:05.370333 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.03s 2025-07-12 14:00:05.370342 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.47s 2025-07-12 14:00:05.370351 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.74s 2025-07-12 14:00:05.370360 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.44s 2025-07-12 14:00:05.370369 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.96s 2025-07-12 14:00:05.370378 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.81s 2025-07-12 14:00:05.370386 | orchestrator | 
service-ks-register : barbican | Creating users ------------------------- 3.66s 2025-07-12 14:00:05.370395 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.30s 2025-07-12 14:00:05.370404 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.30s 2025-07-12 14:00:05.370413 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.20s 2025-07-12 14:00:05.370421 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.15s 2025-07-12 14:00:05.370430 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.24s 2025-07-12 14:00:05.370439 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.15s 2025-07-12 14:00:05.370448 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.06s 2025-07-12 14:00:05.370456 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.04s 2025-07-12 14:00:05.370466 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 1.93s 2025-07-12 14:00:05.370474 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.46s 2025-07-12 14:00:05.370483 | orchestrator | 2025-07-12 14:00:05 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 14:00:05.370492 | orchestrator | 2025-07-12 14:00:05 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:00:05.370501 | orchestrator | 2025-07-12 14:00:05 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED 2025-07-12 14:00:05.370510 | orchestrator | 2025-07-12 14:00:05 | INFO  | Task 2f2b0967-e62a-48a0-8b35-3bd94b3a5558 is in state STARTED 2025-07-12 14:00:05.370519 | orchestrator | 2025-07-12 14:00:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 
14:00:08.415665 | orchestrator | 2025-07-12 14:00:08 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 14:00:08.415897 | orchestrator | 2025-07-12 14:00:08 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:00:08.416763 | orchestrator | 2025-07-12 14:00:08 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED 2025-07-12 14:00:08.417320 | orchestrator | 2025-07-12 14:00:08 | INFO  | Task 2f2b0967-e62a-48a0-8b35-3bd94b3a5558 is in state STARTED 2025-07-12 14:00:08.419697 | orchestrator | 2025-07-12 14:00:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:00:11.452016 | orchestrator | 2025-07-12 14:00:11 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 14:00:11.455769 | orchestrator | 2025-07-12 14:00:11 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:00:11.457013 | orchestrator | 2025-07-12 14:00:11 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED 2025-07-12 14:00:11.457759 | orchestrator | 2025-07-12 14:00:11 | INFO  | Task 2f2b0967-e62a-48a0-8b35-3bd94b3a5558 is in state STARTED 2025-07-12 14:00:11.457805 | orchestrator | 2025-07-12 14:00:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:00:14.499038 | orchestrator | 2025-07-12 14:00:14 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 14:00:14.501225 | orchestrator | 2025-07-12 14:00:14 | INFO  | Task 6a5748b4-95c0-41c1-b2ea-bdcf2af981a8 is in state STARTED 2025-07-12 14:00:14.504510 | orchestrator | 2025-07-12 14:00:14 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:00:14.506365 | orchestrator | 2025-07-12 14:00:14 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED 2025-07-12 14:00:14.509646 | orchestrator | 2025-07-12 14:00:14 | INFO  | Task 2f2b0967-e62a-48a0-8b35-3bd94b3a5558 is in state SUCCESS 2025-07-12 
14:00:14.509846 | orchestrator | 2025-07-12 14:00:14.511621 | orchestrator | 2025-07-12 14:00:14.511665 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:00:14.511679 | orchestrator | 2025-07-12 14:00:14.511691 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:00:14.511703 | orchestrator | Saturday 12 July 2025 13:58:59 +0000 (0:00:01.123) 0:00:01.123 ********* 2025-07-12 14:00:14.511714 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:00:14.511726 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:00:14.511737 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:00:14.511748 | orchestrator | 2025-07-12 14:00:14.511759 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:00:14.511770 | orchestrator | Saturday 12 July 2025 13:59:00 +0000 (0:00:00.810) 0:00:01.934 ********* 2025-07-12 14:00:14.511782 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-07-12 14:00:14.511794 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-07-12 14:00:14.511804 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-07-12 14:00:14.511815 | orchestrator | 2025-07-12 14:00:14.511827 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-07-12 14:00:14.511838 | orchestrator | 2025-07-12 14:00:14.511848 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-12 14:00:14.511860 | orchestrator | Saturday 12 July 2025 13:59:00 +0000 (0:00:00.516) 0:00:02.451 ********* 2025-07-12 14:00:14.511871 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:00:14.511882 | orchestrator | 2025-07-12 14:00:14.511917 | orchestrator | TASK [service-ks-register : placement | Creating 
services] ********************* 2025-07-12 14:00:14.511930 | orchestrator | Saturday 12 July 2025 13:59:01 +0000 (0:00:00.881) 0:00:03.332 ********* 2025-07-12 14:00:14.511941 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-07-12 14:00:14.511952 | orchestrator | 2025-07-12 14:00:14.511963 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-07-12 14:00:14.511974 | orchestrator | Saturday 12 July 2025 13:59:04 +0000 (0:00:03.429) 0:00:06.763 ********* 2025-07-12 14:00:14.511985 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-07-12 14:00:14.511996 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-07-12 14:00:14.512007 | orchestrator | 2025-07-12 14:00:14.512018 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-07-12 14:00:14.512029 | orchestrator | Saturday 12 July 2025 13:59:10 +0000 (0:00:05.895) 0:00:12.659 ********* 2025-07-12 14:00:14.512040 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 14:00:14.512051 | orchestrator | 2025-07-12 14:00:14.512061 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-07-12 14:00:14.512072 | orchestrator | Saturday 12 July 2025 13:59:13 +0000 (0:00:03.080) 0:00:15.739 ********* 2025-07-12 14:00:14.512083 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:00:14.512094 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-07-12 14:00:14.512105 | orchestrator | 2025-07-12 14:00:14.512115 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-07-12 14:00:14.512148 | orchestrator | Saturday 12 July 2025 13:59:17 +0000 (0:00:03.780) 0:00:19.519 ********* 2025-07-12 14:00:14.512160 | 
orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:00:14.512171 | orchestrator | 2025-07-12 14:00:14.512181 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-07-12 14:00:14.512192 | orchestrator | Saturday 12 July 2025 13:59:20 +0000 (0:00:03.240) 0:00:22.759 ********* 2025-07-12 14:00:14.512204 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-07-12 14:00:14.512217 | orchestrator | 2025-07-12 14:00:14.512229 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-12 14:00:14.512277 | orchestrator | Saturday 12 July 2025 13:59:25 +0000 (0:00:04.150) 0:00:26.910 ********* 2025-07-12 14:00:14.512290 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:14.512302 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:14.512315 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:14.512327 | orchestrator | 2025-07-12 14:00:14.512339 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-07-12 14:00:14.512353 | orchestrator | Saturday 12 July 2025 13:59:25 +0000 (0:00:00.278) 0:00:27.189 ********* 2025-07-12 14:00:14.512383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.512416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.512432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.512453 | orchestrator | 2025-07-12 14:00:14.512466 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-07-12 14:00:14.512479 | orchestrator | Saturday 12 July 2025 13:59:26 +0000 (0:00:01.060) 0:00:28.250 ********* 2025-07-12 14:00:14.512492 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:14.512505 | orchestrator | 2025-07-12 14:00:14.512517 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-07-12 14:00:14.512530 | orchestrator | Saturday 12 July 2025 13:59:26 +0000 (0:00:00.294) 0:00:28.544 ********* 2025-07-12 14:00:14.512543 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:14.512556 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:14.512569 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:14.512580 | orchestrator | 2025-07-12 14:00:14.512591 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-12 14:00:14.512602 | orchestrator | Saturday 12 July 2025 13:59:27 +0000 (0:00:01.211) 0:00:29.755 ********* 2025-07-12 14:00:14.512613 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:00:14.512624 | orchestrator | 2025-07-12 14:00:14.512635 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-07-12 14:00:14.512646 | orchestrator | Saturday 12 July 2025 13:59:29 +0000 (0:00:01.236) 0:00:30.992 ********* 2025-07-12 14:00:14.512663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.512684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.512696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.512714 | orchestrator | 2025-07-12 14:00:14.512725 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-07-12 14:00:14.512736 | orchestrator | Saturday 12 July 2025 13:59:31 +0000 (0:00:01.871) 0:00:32.864 ********* 2025-07-12 14:00:14.512748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:14.512760 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:14.512777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:14.512789 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:14.512807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:14.512819 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:14.512830 | orchestrator | 2025-07-12 14:00:14.512841 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-07-12 14:00:14.512852 | orchestrator | Saturday 12 July 2025 
13:59:31 +0000 (0:00:00.714) 0:00:33.578 ********* 2025-07-12 14:00:14.512863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:14.512881 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:14.512893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2025-07-12 14:00:14.512904 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:14.512915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:14.512927 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:14.512938 | orchestrator | 2025-07-12 14:00:14.512954 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-07-12 14:00:14.512965 | orchestrator | Saturday 12 July 2025 13:59:32 +0000 (0:00:00.602) 0:00:34.181 ********* 2025-07-12 14:00:14.512982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.512994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.513012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.513023 | orchestrator | 2025-07-12 14:00:14.513034 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-07-12 14:00:14.513046 | orchestrator | Saturday 12 July 2025 13:59:34 +0000 (0:00:01.789) 0:00:35.970 ********* 2025-07-12 14:00:14.513057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.513074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.513094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.513112 | orchestrator | 2025-07-12 14:00:14.513124 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-07-12 14:00:14.513135 | orchestrator | Saturday 12 July 2025 13:59:37 +0000 (0:00:03.521) 0:00:39.492 ********* 2025-07-12 14:00:14.513146 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 14:00:14.513157 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 14:00:14.513168 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 
14:00:14.513179 | orchestrator | 2025-07-12 14:00:14.513190 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-07-12 14:00:14.513201 | orchestrator | Saturday 12 July 2025 13:59:39 +0000 (0:00:02.209) 0:00:41.702 ********* 2025-07-12 14:00:14.513213 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:14.513224 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:00:14.513235 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:00:14.513281 | orchestrator | 2025-07-12 14:00:14.513293 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-07-12 14:00:14.513304 | orchestrator | Saturday 12 July 2025 13:59:41 +0000 (0:00:01.988) 0:00:43.690 ********* 2025-07-12 14:00:14.513315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:14.513327 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:00:14.513344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:14.513357 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:00:14.513377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 14:00:14.513396 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:00:14.513407 | orchestrator | 2025-07-12 14:00:14.513418 | orchestrator | TASK [placement : Check placement containers] ********************************** 
2025-07-12 14:00:14.513429 | orchestrator | Saturday 12 July 2025 13:59:42 +0000 (0:00:00.623) 0:00:44.313 ********* 2025-07-12 14:00:14.513441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.513453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 
14:00:14.513476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 14:00:14.513488 | orchestrator | 2025-07-12 14:00:14.513499 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-07-12 14:00:14.513510 | orchestrator | Saturday 12 July 2025 13:59:43 +0000 (0:00:01.395) 0:00:45.709 ********* 2025-07-12 14:00:14.513521 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:14.513532 | orchestrator | 2025-07-12 14:00:14.513543 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-07-12 14:00:14.513560 | orchestrator | Saturday 12 July 2025 13:59:46 +0000 (0:00:02.544) 0:00:48.253 ********* 2025-07-12 14:00:14.513571 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:14.513582 | orchestrator | 2025-07-12 14:00:14.513593 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-07-12 14:00:14.513604 | orchestrator | Saturday 12 July 2025 13:59:48 +0000 (0:00:02.351) 0:00:50.605 ********* 2025-07-12 14:00:14.513615 | orchestrator | changed: [testbed-node-0] 2025-07-12 
14:00:14.513626 | orchestrator | 2025-07-12 14:00:14.513637 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 14:00:14.513648 | orchestrator | Saturday 12 July 2025 14:00:01 +0000 (0:00:12.324) 0:01:02.930 ********* 2025-07-12 14:00:14.513659 | orchestrator | 2025-07-12 14:00:14.513670 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 14:00:14.513681 | orchestrator | Saturday 12 July 2025 14:00:01 +0000 (0:00:00.132) 0:01:03.062 ********* 2025-07-12 14:00:14.513692 | orchestrator | 2025-07-12 14:00:14.513709 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 14:00:14.513721 | orchestrator | Saturday 12 July 2025 14:00:01 +0000 (0:00:00.134) 0:01:03.196 ********* 2025-07-12 14:00:14.513731 | orchestrator | 2025-07-12 14:00:14.513742 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-07-12 14:00:14.513753 | orchestrator | Saturday 12 July 2025 14:00:01 +0000 (0:00:00.066) 0:01:03.263 ********* 2025-07-12 14:00:14.513764 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:00:14.513775 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:00:14.513786 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:00:14.513798 | orchestrator | 2025-07-12 14:00:14.513809 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:00:14.513821 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 14:00:14.513833 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 14:00:14.513844 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 14:00:14.513855 | orchestrator | 2025-07-12 14:00:14.513866 | orchestrator | 
2025-07-12 14:00:14.513877 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:00:14.513888 | orchestrator | Saturday 12 July 2025 14:00:10 +0000 (0:00:09.167) 0:01:12.430 ********* 2025-07-12 14:00:14.513899 | orchestrator | =============================================================================== 2025-07-12 14:00:14.513910 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.32s 2025-07-12 14:00:14.513921 | orchestrator | placement : Restart placement-api container ----------------------------- 9.17s 2025-07-12 14:00:14.513932 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 5.90s 2025-07-12 14:00:14.513942 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.15s 2025-07-12 14:00:14.513953 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.78s 2025-07-12 14:00:14.513964 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.52s 2025-07-12 14:00:14.513975 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.43s 2025-07-12 14:00:14.513986 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.24s 2025-07-12 14:00:14.513997 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.08s 2025-07-12 14:00:14.514008 | orchestrator | placement : Creating placement databases -------------------------------- 2.54s 2025-07-12 14:00:14.514106 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.35s 2025-07-12 14:00:14.514123 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.21s 2025-07-12 14:00:14.514142 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.99s 2025-07-12 
14:00:14.514153 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.87s 2025-07-12 14:00:14.514164 | orchestrator | placement : Copying over config.json files for services ----------------- 1.79s 2025-07-12 14:00:14.514176 | orchestrator | placement : Check placement containers ---------------------------------- 1.40s 2025-07-12 14:00:14.514186 | orchestrator | placement : include_tasks ----------------------------------------------- 1.23s 2025-07-12 14:00:14.514198 | orchestrator | placement : Set placement policy file ----------------------------------- 1.21s 2025-07-12 14:00:14.514208 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.06s 2025-07-12 14:00:14.514220 | orchestrator | placement : include_tasks ----------------------------------------------- 0.88s 2025-07-12 14:00:14.514231 | orchestrator | 2025-07-12 14:00:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:00:17.551083 | orchestrator | 2025-07-12 14:00:17 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 14:00:17.551359 | orchestrator | 2025-07-12 14:00:17 | INFO  | Task 6a5748b4-95c0-41c1-b2ea-bdcf2af981a8 is in state STARTED 2025-07-12 14:00:17.551907 | orchestrator | 2025-07-12 14:00:17 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:00:17.552552 | orchestrator | 2025-07-12 14:00:17 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED 2025-07-12 14:00:17.552575 | orchestrator | 2025-07-12 14:00:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:00:20.578522 | orchestrator | 2025-07-12 14:00:20 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 14:00:20.578707 | orchestrator | 2025-07-12 14:00:20 | INFO  | Task 6a5748b4-95c0-41c1-b2ea-bdcf2af981a8 is in state STARTED 2025-07-12 14:00:20.579189 | orchestrator | 2025-07-12 14:00:20 | INFO  | Task 
5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:00:20.579669 | orchestrator | 2025-07-12 14:00:20 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED 2025-07-12 14:00:20.579689 | orchestrator | 2025-07-12 14:00:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:00:23.611722 | orchestrator | 2025-07-12 14:00:23 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 14:00:23.611921 | orchestrator | 2025-07-12 14:00:23 | INFO  | Task 6a5748b4-95c0-41c1-b2ea-bdcf2af981a8 is in state SUCCESS 2025-07-12 14:00:23.612419 | orchestrator | 2025-07-12 14:00:23 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:00:23.615410 | orchestrator | 2025-07-12 14:00:23 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED 2025-07-12 14:00:23.615748 | orchestrator | 2025-07-12 14:00:23 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:00:23.615780 | orchestrator | 2025-07-12 14:00:23 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:00:26.638270 | orchestrator | 2025-07-12 14:00:26 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 14:00:26.638496 | orchestrator | 2025-07-12 14:00:26 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:00:26.639004 | orchestrator | 2025-07-12 14:00:26 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED 2025-07-12 14:00:26.639618 | orchestrator | 2025-07-12 14:00:26 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:00:26.639640 | orchestrator | 2025-07-12 14:00:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:00:29.675477 | orchestrator | 2025-07-12 14:00:29 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 14:00:29.675605 | orchestrator | 2025-07-12 14:00:29 | INFO  | Task 
5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:00:29.676216 | orchestrator | 2025-07-12 14:00:29 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED 2025-07-12 14:00:29.676896 | orchestrator | 2025-07-12 14:00:29 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:00:29.677000 | orchestrator | 2025-07-12 14:00:29 | INFO  | Wait 1 second(s) until the next check [identical polling cycles from 14:00:32 through 14:01:03 elided: same four tasks in state STARTED, one check every ~3 s] 2025-07-12 14:01:06.165445 | orchestrator | 2025-07-12 14:01:06 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state STARTED 2025-07-12 14:01:06.165646 | orchestrator | 2025-07-12 14:01:06 | INFO  | Task 
5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:01:06.166130 | orchestrator | 2025-07-12 14:01:06 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED 2025-07-12 14:01:06.166684 | orchestrator | 2025-07-12 14:01:06 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:01:06.166714 | orchestrator | 2025-07-12 14:01:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:01:09.192092 | orchestrator | 2025-07-12 14:01:09 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED 2025-07-12 14:01:09.192463 | orchestrator | 2025-07-12 14:01:09 | INFO  | Task 9826eadf-548f-4e10-b5d4-06d865c90abc is in state SUCCESS 2025-07-12 14:01:09.193324 | orchestrator | 2025-07-12 14:01:09.193355 | orchestrator | 2025-07-12 14:01:09.193368 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:01:09.193380 | orchestrator | 2025-07-12 14:01:09.193391 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:01:09.193403 | orchestrator | Saturday 12 July 2025 14:00:18 +0000 (0:00:00.340) 0:00:00.340 ********* 2025-07-12 14:01:09.193415 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:01:09.193427 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:01:09.193438 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:01:09.193545 | orchestrator | 2025-07-12 14:01:09.193558 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:01:09.193620 | orchestrator | Saturday 12 July 2025 14:00:18 +0000 (0:00:00.615) 0:00:00.955 ********* 2025-07-12 14:01:09.193634 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-12 14:01:09.193646 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-12 14:01:09.194001 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-12 
14:01:09.194133 | orchestrator | 2025-07-12 14:01:09.194602 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-07-12 14:01:09.194615 | orchestrator | 2025-07-12 14:01:09.194626 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-07-12 14:01:09.194637 | orchestrator | Saturday 12 July 2025 14:00:20 +0000 (0:00:01.361) 0:00:02.317 ********* 2025-07-12 14:01:09.194648 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:01:09.194659 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:01:09.194670 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:01:09.194681 | orchestrator | 2025-07-12 14:01:09.194693 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:01:09.194704 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:01:09.194717 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:01:09.194728 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:01:09.194739 | orchestrator | 2025-07-12 14:01:09.194750 | orchestrator | 2025-07-12 14:01:09.194761 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:01:09.194772 | orchestrator | Saturday 12 July 2025 14:00:20 +0000 (0:00:00.859) 0:00:03.176 ********* 2025-07-12 14:01:09.194807 | orchestrator | =============================================================================== 2025-07-12 14:01:09.194819 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.36s 2025-07-12 14:01:09.194830 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.86s 2025-07-12 14:01:09.194841 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.62s 2025-07-12 14:01:09.194852 | orchestrator | 2025-07-12 14:01:09.194863 | orchestrator | 2025-07-12 14:01:09.194874 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:01:09.194885 | orchestrator | 2025-07-12 14:01:09.194896 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:01:09.194907 | orchestrator | Saturday 12 July 2025 13:58:10 +0000 (0:00:00.314) 0:00:00.314 ********* 2025-07-12 14:01:09.194918 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:01:09.195205 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:01:09.195238 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:01:09.195250 | orchestrator | 2025-07-12 14:01:09.195299 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:01:09.195312 | orchestrator | Saturday 12 July 2025 13:58:11 +0000 (0:00:00.510) 0:00:00.825 ********* 2025-07-12 14:01:09.195323 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-07-12 14:01:09.195335 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-07-12 14:01:09.195346 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-07-12 14:01:09.195357 | orchestrator | 2025-07-12 14:01:09.195368 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-07-12 14:01:09.195379 | orchestrator | 2025-07-12 14:01:09.195390 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 14:01:09.195401 | orchestrator | Saturday 12 July 2025 13:58:11 +0000 (0:00:00.423) 0:00:01.249 ********* 2025-07-12 14:01:09.195413 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:01:09.195424 | orchestrator | 2025-07-12 14:01:09.195435 | 
orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-07-12 14:01:09.195446 | orchestrator | Saturday 12 July 2025 13:58:12 +0000 (0:00:00.618) 0:00:01.867 ********* 2025-07-12 14:01:09.195457 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-07-12 14:01:09.195468 | orchestrator | 2025-07-12 14:01:09.195479 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-07-12 14:01:09.195490 | orchestrator | Saturday 12 July 2025 13:58:16 +0000 (0:00:03.776) 0:00:05.644 ********* 2025-07-12 14:01:09.195502 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-07-12 14:01:09.195513 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-07-12 14:01:09.195524 | orchestrator | 2025-07-12 14:01:09.195535 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-07-12 14:01:09.195546 | orchestrator | Saturday 12 July 2025 13:58:22 +0000 (0:00:06.416) 0:00:12.060 ********* 2025-07-12 14:01:09.195557 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-07-12 14:01:09.195568 | orchestrator | 2025-07-12 14:01:09.195579 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-07-12 14:01:09.195591 | orchestrator | Saturday 12 July 2025 13:58:26 +0000 (0:00:03.375) 0:00:15.436 ********* 2025-07-12 14:01:09.195648 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:01:09.195662 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-07-12 14:01:09.195674 | orchestrator | 2025-07-12 14:01:09.195685 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-07-12 14:01:09.195696 | orchestrator | Saturday 12 July 2025 13:58:30 +0000 
(0:00:04.007) 0:00:19.444 ********* 2025-07-12 14:01:09.195707 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:01:09.195718 | orchestrator | 2025-07-12 14:01:09.195740 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-07-12 14:01:09.195752 | orchestrator | Saturday 12 July 2025 13:58:32 +0000 (0:00:02.910) 0:00:22.354 ********* 2025-07-12 14:01:09.195763 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-07-12 14:01:09.195773 | orchestrator | 2025-07-12 14:01:09.195785 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-07-12 14:01:09.195796 | orchestrator | Saturday 12 July 2025 13:58:36 +0000 (0:00:03.503) 0:00:25.858 ********* 2025-07-12 14:01:09.195810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:09.195834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:09.195849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:09.195864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.195912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.195935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.195949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.195969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.195982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.195995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 
14:01:09.196091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196201 | orchestrator | 2025-07-12 14:01:09.196213 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-07-12 14:01:09.196224 | orchestrator | Saturday 12 July 2025 13:58:39 +0000 (0:00:02.702) 0:00:28.560 ********* 2025-07-12 14:01:09.196235 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:01:09.196246 | orchestrator | 2025-07-12 14:01:09.196258 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-07-12 14:01:09.196325 | orchestrator | Saturday 12 July 2025 13:58:39 +0000 (0:00:00.150) 0:00:28.710 ********* 2025-07-12 14:01:09.196336 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:01:09.196347 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:01:09.196359 | orchestrator | skipping: [testbed-node-2] 2025-07-12 
14:01:09.196370 | orchestrator | 2025-07-12 14:01:09.196380 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 14:01:09.196391 | orchestrator | Saturday 12 July 2025 13:58:39 +0000 (0:00:00.340) 0:00:29.051 ********* 2025-07-12 14:01:09.196402 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:01:09.196413 | orchestrator | 2025-07-12 14:01:09.196425 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-07-12 14:01:09.196435 | orchestrator | Saturday 12 July 2025 13:58:40 +0000 (0:00:00.795) 0:00:29.846 ********* 2025-07-12 14:01:09.196447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:09.196465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:09.196477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:09.196535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 
14:01:09.196652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.196783 | orchestrator | 2025-07-12 14:01:09.196793 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-07-12 14:01:09.196803 | orchestrator | Saturday 12 July 2025 13:58:46 +0000 (0:00:06.267) 0:00:36.114 ********* 2025-07-12 14:01:09.196814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.196825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.196839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:09.196857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.196893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.196905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.196915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.196925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.196940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.196956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.196967 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:01:09.196977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197026 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:01:09.197037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 14:01:09.197047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.197057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197109 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:01:09.197119 | orchestrator |
2025-07-12 14:01:09.197155 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-07-12 14:01:09.197167 | orchestrator | Saturday 12 July 2025 13:58:47 +0000 (0:00:01.254) 0:00:37.368 *********
2025-07-12 14:01:09.197178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 14:01:09.197188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.197199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 14:01:09.197287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.197320 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:01:09.197331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197383 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:01:09.197421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 14:01:09.197434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.197444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197499 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:01:09.197509 | orchestrator |
2025-07-12 14:01:09.197519 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-07-12 14:01:09.197529 | orchestrator | Saturday 12 July 2025 13:58:49 +0000 (0:00:01.578) 0:00:38.946 *********
2025-07-12 14:01:09.197564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 14:01:09.197576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 14:01:09.197599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 14:01:09.197610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.197620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.197656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.197668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197843 | orchestrator |
2025-07-12 14:01:09.197853 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-07-12 14:01:09.197863 | orchestrator | Saturday 12 July 2025 13:58:55 +0000 (0:00:06.330) 0:00:45.277 *********
2025-07-12 14:01:09.197897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 14:01:09.197910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 14:01:09.197927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 14:01:09.197941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.197952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.197962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 14:01:09.197978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.197989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 14:01:09.198005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198162 | orchestrator | 2025-07-12 14:01:09.198172 | orchestrator | TASK [designate : 
Copying over pools.yaml] ************************************* 2025-07-12 14:01:09.198182 | orchestrator | Saturday 12 July 2025 13:59:16 +0000 (0:00:20.176) 0:01:05.453 ********* 2025-07-12 14:01:09.198192 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 14:01:09.198202 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 14:01:09.198212 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 14:01:09.198222 | orchestrator | 2025-07-12 14:01:09.198232 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-07-12 14:01:09.198242 | orchestrator | Saturday 12 July 2025 13:59:21 +0000 (0:00:05.370) 0:01:10.823 ********* 2025-07-12 14:01:09.198251 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 14:01:09.198275 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 14:01:09.198286 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 14:01:09.198296 | orchestrator | 2025-07-12 14:01:09.198306 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-07-12 14:01:09.198315 | orchestrator | Saturday 12 July 2025 13:59:25 +0000 (0:00:03.944) 0:01:14.768 ********* 2025-07-12 14:01:09.198337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.198349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.198360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.198375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198525 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198563 | orchestrator | 2025-07-12 14:01:09.198573 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-07-12 14:01:09.198589 | 
orchestrator | Saturday 12 July 2025 13:59:28 +0000 (0:00:03.218) 0:01:17.986 ********* 2025-07-12 14:01:09.198604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.198615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.198625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.198740 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.198836 | orchestrator | 2025-07-12 14:01:09.198846 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 14:01:09.198856 | orchestrator | Saturday 12 July 2025 13:59:32 +0000 (0:00:03.740) 0:01:21.726 ********* 2025-07-12 14:01:09.198866 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:01:09.198876 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:01:09.198886 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:01:09.198896 | orchestrator | 2025-07-12 14:01:09.198906 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-07-12 14:01:09.198916 | orchestrator | Saturday 12 July 2025 13:59:32 +0000 (0:00:00.645) 0:01:22.372 ********* 2025-07-12 14:01:09.198926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.198943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:09.198954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.198965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.198980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:09.198996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 14:01:09.199057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199084 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:01:09.199094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 14:01:09.199110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199131 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:01:09.199141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 14:01:09.199183 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:01:09.199193 | orchestrator | 2025-07-12 14:01:09.199203 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-07-12 14:01:09.199213 | orchestrator | Saturday 12 July 2025 13:59:34 +0000 
(0:00:01.440) 0:01:23.812 ********* 2025-07-12 14:01:09.199223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:09.199240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:09.199250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 14:01:09.199274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199387 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 14:01:09.199471 | orchestrator | 2025-07-12 14:01:09.199481 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 14:01:09.199491 | orchestrator | Saturday 12 July 2025 13:59:39 +0000 (0:00:05.510) 0:01:29.322 ********* 2025-07-12 14:01:09.199506 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:01:09.199516 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:01:09.199526 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:01:09.199536 | orchestrator | 2025-07-12 14:01:09.199546 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-07-12 14:01:09.199555 | orchestrator | Saturday 12 July 2025 13:59:40 +0000 (0:00:00.735) 0:01:30.058 ********* 2025-07-12 14:01:09.199565 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-07-12 14:01:09.199575 | orchestrator | 2025-07-12 14:01:09.199585 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-07-12 14:01:09.199595 | orchestrator | Saturday 12 July 2025 13:59:43 +0000 (0:00:03.250) 0:01:33.309 ********* 2025-07-12 14:01:09.199605 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 14:01:09.199614 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-07-12 14:01:09.199624 | orchestrator | 2025-07-12 14:01:09.199634 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-07-12 14:01:09.199644 | orchestrator | Saturday 12 July 2025 13:59:46 +0000 (0:00:02.412) 0:01:35.722 ********* 
2025-07-12 14:01:09.199653 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:01:09.199663 | orchestrator | 2025-07-12 14:01:09.199673 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-12 14:01:09.199683 | orchestrator | Saturday 12 July 2025 14:00:00 +0000 (0:00:14.095) 0:01:49.818 ********* 2025-07-12 14:01:09.199693 | orchestrator | 2025-07-12 14:01:09.199702 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-12 14:01:09.199712 | orchestrator | Saturday 12 July 2025 14:00:00 +0000 (0:00:00.153) 0:01:49.971 ********* 2025-07-12 14:01:09.199722 | orchestrator | 2025-07-12 14:01:09.199732 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-12 14:01:09.199742 | orchestrator | Saturday 12 July 2025 14:00:00 +0000 (0:00:00.054) 0:01:50.025 ********* 2025-07-12 14:01:09.199751 | orchestrator | 2025-07-12 14:01:09.199761 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-07-12 14:01:09.199771 | orchestrator | Saturday 12 July 2025 14:00:00 +0000 (0:00:00.102) 0:01:50.128 ********* 2025-07-12 14:01:09.199781 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:01:09.199791 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:01:09.199800 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:01:09.199810 | orchestrator | 2025-07-12 14:01:09.199820 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-07-12 14:01:09.199830 | orchestrator | Saturday 12 July 2025 14:00:15 +0000 (0:00:14.925) 0:02:05.053 ********* 2025-07-12 14:01:09.199845 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:01:09.199855 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:01:09.199871 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:01:09.199880 | orchestrator | 2025-07-12 14:01:09.199890 | 
RUNNING HANDLER [designate : Restart designate-central container] **************
Saturday 12 July 2025 14:00:23 +0000 (0:00:07.531) 0:02:12.585 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [designate : Restart designate-producer container] *************
Saturday 12 July 2025 14:00:29 +0000 (0:00:06.499) 0:02:19.084 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [designate : Restart designate-mdns container] *****************
Saturday 12 July 2025 14:00:38 +0000 (0:00:08.572) 0:02:27.657 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [designate : Restart designate-worker container] ***************
Saturday 12 July 2025 14:00:44 +0000 (0:00:06.570) 0:02:34.228 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [designate : Non-destructive DNS pools update] ****************************
Saturday 12 July 2025 14:00:57 +0000 (0:00:12.859) 0:02:47.087 *********
changed: [testbed-node-0]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=29  changed=24  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
testbed-node-1 : ok=19  changed=15  unreachable=0  failed=0  skipped=6  rescued=0  ignored=0
testbed-node-2 : ok=19  changed=15  unreachable=0  failed=0  skipped=6  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Saturday 12 July 2025 14:01:06 +0000 (0:00:09.143) 0:02:56.231 *********
===============================================================================
designate : Copying over designate.conf -------------------------------- 20.18s
designate : Restart designate-backend-bind9 container ------------------ 14.93s
designate : Running Designate bootstrap container ---------------------- 14.10s
designate : Restart designate-worker container ------------------------- 12.86s
designate : Non-destructive DNS pools update ---------------------------- 9.14s
designate : Restart designate-producer container ------------------------ 8.57s
designate : Restart designate-api container ----------------------------- 7.53s
designate : Restart designate-mdns container ---------------------------- 6.57s
designate : Restart designate-central container ------------------------- 6.50s
service-ks-register : designate | Creating endpoints -------------------- 6.42s
designate : Copying over config.json files for services ----------------- 6.33s
service-cert-copy : designate | Copying over extra CA certificates ------ 6.27s
designate : Check designate containers ---------------------------------- 5.51s
designate : Copying over pools.yaml ------------------------------------- 5.37s
service-ks-register : designate | Creating users ------------------------ 4.01s
designate : Copying over named.conf ------------------------------------- 3.94s
service-ks-register : designate | Creating services --------------------- 3.78s
designate : Copying over rndc.key --------------------------------------- 3.74s
service-ks-register : designate | Granting user roles ------------------- 3.50s
service-ks-register : designate | Creating projects --------------------- 3.38s
2025-07-12 14:01:09 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:09 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:09 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:12 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:12 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:12 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:12 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:15 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:15 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:15 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:15 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:15 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:18 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:18 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:18 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:18 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:18 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:21 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:21 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:21 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:21 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:21 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:24 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:24 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:24 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:24 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:24 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:27 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:27 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:27 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:27 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:27 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:30 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:30 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:30 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:30 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:30 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:33 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:33 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:33 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:33 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:33 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:36 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:36 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:36 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:36 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:36 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:39 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:39 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:39 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:39 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:39 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:42 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state STARTED
2025-07-12 14:01:42 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:42 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:42 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:45 | INFO  | Task e50cb648-08b0-48fd-80fe-9fa9f896d8dd is in state SUCCESS
2025-07-12 14:01:45 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:45 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:45 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:48 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:48 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:48 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:01:48 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:51 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:51 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:51 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:01:51 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:51 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:54 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:54 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:54 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:01:54 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:01:57 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:01:58 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:01:58 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:01:58 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:01:58 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:02:01 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:02:01 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state STARTED
2025-07-12 14:02:01 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:02:01 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED
2025-07-12 14:02:01 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:02:04 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:02:04 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED
2025-07-12 14:02:04 | INFO  | Task 4936a707-0a69-4e27-8cda-d816fcd747bd is in state SUCCESS

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Saturday 12 July 2025 14:01:13 +0000 (0:00:00.233) 0:00:00.233 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Group hosts based on enabled services] ***********************************
Saturday 12 July 2025 14:01:14 +0000 (0:00:00.891) 0:00:01.125 *********
ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
ok: [testbed-manager] => (item=enable_ceph_rgw_True)
ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
ok: [testbed-node-5] => (item=enable_ceph_rgw_True)

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-rgw : include_tasks] ************************************************
Saturday 12 July 2025 14:01:15 +0000 (0:00:00.615) 0:00:01.741 *********
included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5

TASK [service-ks-register : ceph-rgw | Creating services] **********************
Saturday 12 July 2025 14:01:16 +0000 (0:00:01.294) 0:00:03.036 *********
changed: [testbed-node-0] => (item=swift (object-store))

TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
Saturday 12 July 2025 14:01:19 +0000 (0:00:03.363) 0:00:06.399 *********
changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)

TASK [service-ks-register : ceph-rgw | Creating projects] **********************
Saturday 12 July 2025 14:01:26 +0000 (0:00:06.330) 0:00:12.730 *********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : ceph-rgw | Creating users] *************************
Saturday 12 July 2025 14:01:29 +0000 (0:00:03.383) 0:00:16.114 *********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=ceph_rgw -> service)

TASK [service-ks-register : ceph-rgw | Creating roles] *************************
Saturday 12 July 2025 14:01:33 +0000 (0:00:03.828) 0:00:19.943 *********
ok: [testbed-node-0] => (item=admin)
changed: [testbed-node-0] => (item=ResellerAdmin)

TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
Saturday 12 July 2025 14:01:39 +0000 (0:00:06.437) 0:00:26.380 *********
changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)

PLAY RECAP *********************************************************************
testbed-manager : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-0 : ok=9  changed=5  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-1 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-2 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-3 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-4 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
testbed-node-5 : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Saturday 12 July 2025 14:01:44 +0000 (0:00:04.663) 0:00:31.044 *********
===============================================================================
service-ks-register : ceph-rgw | Creating roles ------------------------- 6.44s
service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.33s
service-ks-register : ceph-rgw | Granting user roles -------------------- 4.66s
service-ks-register : ceph-rgw | Creating users ------------------------- 3.83s
service-ks-register : ceph-rgw | Creating projects ---------------------- 3.38s
service-ks-register : ceph-rgw | Creating services ---------------------- 3.36s
ceph-rgw : include_tasks ------------------------------------------------ 1.29s
Group hosts based on Kolla action --------------------------------------- 0.89s
Group hosts based on enabled services ----------------------------------- 0.62s

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Saturday 12 July 2025 14:00:09 +0000 (0:00:00.248) 0:00:00.248 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Saturday 12 July 2025 14:00:09 +0000 (0:00:00.381) 0:00:00.630 *********
ok: [testbed-node-0] => (item=enable_magnum_True)
ok: [testbed-node-1] => (item=enable_magnum_True)
ok: [testbed-node-2] => (item=enable_magnum_True)

PLAY [Apply role magnum] *******************************************************

TASK [magnum : include_tasks] **************************************************
Saturday 12 July 2025 14:00:10 +0000 (0:00:00.819) 0:00:01.449 *********
included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : magnum | Creating services] ************************
Saturday 12 July 2025 14:00:11 +0000 (0:00:00.889) 0:00:02.339 *********
changed: [testbed-node-0] => (item=magnum (container-infra))

TASK [service-ks-register : magnum | Creating endpoints] ***********************
Saturday 12 July 2025 14:00:14 +0000 (0:00:03.291) 0:00:05.630 *********
changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)

TASK [service-ks-register : magnum | Creating projects] ************************
Saturday 12 July 2025 14:00:20 +0000 (0:00:06.176) 0:00:11.807 *********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : magnum | Creating users] ***************************
Saturday 12 July 2025 14:00:24 +0000 (0:00:03.329) 0:00:15.137 *********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=magnum -> service)

TASK [service-ks-register : magnum | Creating roles] ***************************
Saturday 12 July 2025 14:00:28 +0000 (0:00:04.066) 0:00:19.204 *********
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : magnum | Granting user roles] **********************
Saturday 12 July 2025 14:00:32 +0000 (0:00:03.847) 0:00:23.051 *********
changed: [testbed-node-0] => (item=magnum -> service -> admin)

TASK [magnum : Creating Magnum trustee domain] *********************************
Saturday 12 July 2025 14:00:36 +0000 (0:00:04.291) 0:00:27.343 *********
changed: [testbed-node-0]

TASK [magnum : Creating Magnum trustee user] ***********************************
Saturday 12 July 2025 14:00:40 +0000 (0:00:03.598) 0:00:30.942 *********
changed: [testbed-node-0]

TASK [magnum : Creating Magnum trustee user role] ******************************
Saturday 12 July 2025 14:00:44 +0000 (0:00:04.149) 0:00:35.092 *********
changed: [testbed-node-0]

TASK [magnum : Ensuring config directories exist] ******************************
Saturday 12 July 2025 14:00:48 +0000 (0:00:03.759) 0:00:38.852 *********
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})

TASK [magnum : Check if policies shall be overwritten] *************************
Saturday 12 July 2025 14:00:50 +0000 (0:00:02.005) 0:00:40.857 *********
skipping: [testbed-node-0]

TASK [magnum : Set magnum policy file] *****************************************
Saturday 12 July 2025 14:00:50 +0000 (0:00:00.151) 0:00:41.009 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-07-12 14:02:04.087179 | orchestrator | Saturday 12 July 2025 14:00:50 +0000 (0:00:00.641) 0:00:41.651 ********* 2025-07-12 14:02:04.087190 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 14:02:04.087201 | orchestrator | 2025-07-12 14:02:04.087212 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-07-12 14:02:04.087223 | orchestrator | Saturday 12 July 2025 14:00:51 +0000 (0:00:00.938) 0:00:42.589 ********* 2025-07-12 14:02:04.087234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.087246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.087271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.087283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.087302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.087370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.087382 | orchestrator | 2025-07-12 14:02:04.087394 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-07-12 14:02:04.087405 | orchestrator | Saturday 12 July 2025 14:00:54 +0000 (0:00:02.459) 0:00:45.049 ********* 2025-07-12 14:02:04.087416 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:02:04.087428 | orchestrator | ok: 
[testbed-node-1] 2025-07-12 14:02:04.087439 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:02:04.087450 | orchestrator | 2025-07-12 14:02:04.087461 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-12 14:02:04.087473 | orchestrator | Saturday 12 July 2025 14:00:54 +0000 (0:00:00.307) 0:00:45.356 ********* 2025-07-12 14:02:04.087484 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:02:04.087495 | orchestrator | 2025-07-12 14:02:04.087506 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-07-12 14:02:04.087517 | orchestrator | Saturday 12 July 2025 14:00:55 +0000 (0:00:00.680) 0:00:46.037 ********* 2025-07-12 14:02:04.087537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.087550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.087569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.087582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.087784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.087819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.087832 | orchestrator | 2025-07-12 14:02:04.087843 | 
orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-07-12 14:02:04.087868 | orchestrator | Saturday 12 July 2025 14:00:57 +0000 (0:00:02.379) 0:00:48.416 ********* 2025-07-12 14:02:04.087880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:04.087892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:04.087904 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 14:02:04.087916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:04.087927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:04.087939 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:02:04.087962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:04.087981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:04.087993 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:02:04.088004 | orchestrator | 2025-07-12 14:02:04.088015 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-07-12 14:02:04.088026 | orchestrator | Saturday 12 July 2025 14:00:59 +0000 (0:00:01.862) 0:00:50.278 ********* 2025-07-12 14:02:04.088037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:04.088049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:04.088060 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:02:04.088078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:04.088101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:04.088113 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:02:04.088124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:04.088136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:04.088147 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:02:04.088158 | orchestrator | 2025-07-12 14:02:04.088169 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-07-12 14:02:04.088180 | orchestrator | Saturday 12 July 2025 14:01:01 +0000 (0:00:02.461) 0:00:52.740 ********* 2025-07-12 14:02:04.088192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.088216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.088234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.088246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.088258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.088270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.088281 | orchestrator | 2025-07-12 14:02:04.088292 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-07-12 14:02:04.088332 | orchestrator | Saturday 12 July 2025 14:01:05 +0000 (0:00:03.271) 0:00:56.011 ********* 2025-07-12 14:02:04.088357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.088370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.088381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.088396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.088409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.088442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.088457 | orchestrator | 2025-07-12 14:02:04.088470 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-07-12 14:02:04.088483 | orchestrator | Saturday 12 July 2025 14:01:12 +0000 (0:00:07.757) 0:01:03.769 ********* 2025-07-12 14:02:04.088498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:04.088512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:04.088526 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:02:04.088539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:04.088560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:04.088573 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:02:04.088599 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 14:02:04.088614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:02:04.088627 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:02:04.088640 | orchestrator | 2025-07-12 14:02:04.088654 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-07-12 14:02:04.088668 | orchestrator | Saturday 12 July 2025 14:01:13 +0000 (0:00:00.779) 0:01:04.548 ********* 2025-07-12 
14:02:04.088682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.088696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.088724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 14:02:04.088743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.088755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.088767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:02:04.088778 | orchestrator | 2025-07-12 14:02:04.088789 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-12 14:02:04.088800 | orchestrator | Saturday 12 July 2025 14:01:15 +0000 (0:00:01.985) 0:01:06.534 ********* 2025-07-12 14:02:04.088823 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:02:04.088834 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:02:04.088845 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:02:04.088856 | orchestrator | 2025-07-12 14:02:04.088867 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-07-12 14:02:04.088878 | orchestrator | Saturday 12 July 2025 14:01:15 +0000 (0:00:00.233) 0:01:06.767 ********* 2025-07-12 14:02:04.088889 | orchestrator | changed: [testbed-node-0] 
2025-07-12 14:02:04.088900 | orchestrator | 2025-07-12 14:02:04.088911 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-07-12 14:02:04.088922 | orchestrator | Saturday 12 July 2025 14:01:18 +0000 (0:00:02.094) 0:01:08.861 ********* 2025-07-12 14:02:04.088933 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:04.088944 | orchestrator | 2025-07-12 14:02:04.088955 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-07-12 14:02:04.088966 | orchestrator | Saturday 12 July 2025 14:01:20 +0000 (0:00:02.559) 0:01:11.421 ********* 2025-07-12 14:02:04.088977 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:04.088988 | orchestrator | 2025-07-12 14:02:04.088999 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 14:02:04.089010 | orchestrator | Saturday 12 July 2025 14:01:35 +0000 (0:00:15.251) 0:01:26.673 ********* 2025-07-12 14:02:04.089021 | orchestrator | 2025-07-12 14:02:04.089032 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 14:02:04.089043 | orchestrator | Saturday 12 July 2025 14:01:35 +0000 (0:00:00.068) 0:01:26.741 ********* 2025-07-12 14:02:04.089054 | orchestrator | 2025-07-12 14:02:04.089065 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 14:02:04.089076 | orchestrator | Saturday 12 July 2025 14:01:35 +0000 (0:00:00.064) 0:01:26.806 ********* 2025-07-12 14:02:04.089087 | orchestrator | 2025-07-12 14:02:04.089098 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-07-12 14:02:04.089109 | orchestrator | Saturday 12 July 2025 14:01:36 +0000 (0:00:00.071) 0:01:26.877 ********* 2025-07-12 14:02:04.089119 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:04.089131 | orchestrator | changed: [testbed-node-2] 
2025-07-12 14:02:04.089142 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:02:04.089153 | orchestrator | 2025-07-12 14:02:04.089164 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-07-12 14:02:04.089180 | orchestrator | Saturday 12 July 2025 14:01:51 +0000 (0:00:15.347) 0:01:42.225 ********* 2025-07-12 14:02:04.089192 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:02:04.089203 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:02:04.089214 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:02:04.089225 | orchestrator | 2025-07-12 14:02:04.089241 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:02:04.089253 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 14:02:04.089265 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 14:02:04.089276 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 14:02:04.089287 | orchestrator | 2025-07-12 14:02:04.089297 | orchestrator | 2025-07-12 14:02:04.089361 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:02:04.089373 | orchestrator | Saturday 12 July 2025 14:02:01 +0000 (0:00:10.118) 0:01:52.344 ********* 2025-07-12 14:02:04.089384 | orchestrator | =============================================================================== 2025-07-12 14:02:04.089394 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.35s 2025-07-12 14:02:04.089404 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.25s 2025-07-12 14:02:04.089420 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.12s 2025-07-12 14:02:04.089429 | 
orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.76s 2025-07-12 14:02:04.089439 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.18s 2025-07-12 14:02:04.089448 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.29s 2025-07-12 14:02:04.089458 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.15s 2025-07-12 14:02:04.089468 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.07s 2025-07-12 14:02:04.089478 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.85s 2025-07-12 14:02:04.089487 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.76s 2025-07-12 14:02:04.089497 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.60s 2025-07-12 14:02:04.089506 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.33s 2025-07-12 14:02:04.089516 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.29s 2025-07-12 14:02:04.089526 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.27s 2025-07-12 14:02:04.089536 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.56s 2025-07-12 14:02:04.089545 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.46s 2025-07-12 14:02:04.089555 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.46s 2025-07-12 14:02:04.089565 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.38s 2025-07-12 14:02:04.089574 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.09s 2025-07-12 14:02:04.089584 | orchestrator | 
magnum : Ensuring config directories exist ------------------------------ 2.01s 2025-07-12 14:02:04.089594 | orchestrator | 2025-07-12 14:02:04 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED 2025-07-12 14:02:04.089604 | orchestrator | 2025-07-12 14:02:04 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:02:04.089613 | orchestrator | 2025-07-12 14:02:04 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:02:07.147404 | orchestrator | 2025-07-12 14:02:07 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED 2025-07-12 14:02:07.149613 | orchestrator | 2025-07-12 14:02:07 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:02:07.150662 | orchestrator | 2025-07-12 14:02:07 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED 2025-07-12 14:02:07.152342 | orchestrator | 2025-07-12 14:02:07 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:02:07.153489 | orchestrator | 2025-07-12 14:02:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:02:10.199533 | orchestrator | 2025-07-12 14:02:10 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED 2025-07-12 14:02:10.199666 | orchestrator | 2025-07-12 14:02:10 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:02:10.202867 | orchestrator | 2025-07-12 14:02:10 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED 2025-07-12 14:02:10.204420 | orchestrator | 2025-07-12 14:02:10 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:02:10.204900 | orchestrator | 2025-07-12 14:02:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:02:13.246922 | orchestrator | 2025-07-12 14:02:13 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED 2025-07-12 14:02:13.247942 | orchestrator | 2025-07-12 14:02:13 | INFO  | Task 
5bf49f8d-85d1-4271-8918-a714a834c22d is in state STARTED 2025-07-12 14:02:58.897420 | orchestrator | 2025-07-12 14:02:58 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED 2025-07-12 14:02:58.898586 | orchestrator | 2025-07-12 14:02:58 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:02:58.899113 | orchestrator | 2025-07-12 14:02:58 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:03:01.940065 | orchestrator | 2025-07-12 14:03:01 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED 2025-07-12 14:03:01.941454 | orchestrator | 2025-07-12 14:03:01 | INFO  | Task 5bf49f8d-85d1-4271-8918-a714a834c22d is in state SUCCESS 2025-07-12 14:03:01.942905 | orchestrator | 2025-07-12 14:03:01.943002 | orchestrator | 2025-07-12 14:03:01.943019 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:03:01.943613 | orchestrator | 2025-07-12 14:03:01.943630 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:03:01.943643 | orchestrator | Saturday 12 July 2025 13:58:10 +0000 (0:00:00.350) 0:00:00.350 ********* 2025-07-12 14:03:01.943662 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:03:01.943680 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:03:01.943691 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:03:01.943959 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:03:01.943975 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:03:01.944010 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:03:01.944022 | orchestrator | 2025-07-12 14:03:01.944034 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:03:01.944045 | orchestrator | Saturday 12 July 2025 13:58:11 +0000 (0:00:00.928) 0:00:01.278 ********* 2025-07-12 14:03:01.944056 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-07-12 
14:03:01.944067 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-07-12 14:03:01.944078 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-07-12 14:03:01.944090 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-07-12 14:03:01.944100 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-07-12 14:03:01.944111 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-07-12 14:03:01.944122 | orchestrator | 2025-07-12 14:03:01.944133 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-07-12 14:03:01.944144 | orchestrator | 2025-07-12 14:03:01.944155 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-12 14:03:01.944166 | orchestrator | Saturday 12 July 2025 13:58:12 +0000 (0:00:00.769) 0:00:02.048 ********* 2025-07-12 14:03:01.944179 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 14:03:01.944191 | orchestrator | 2025-07-12 14:03:01.944202 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-07-12 14:03:01.944214 | orchestrator | Saturday 12 July 2025 13:58:13 +0000 (0:00:01.087) 0:00:03.135 ********* 2025-07-12 14:03:01.944225 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:03:01.944235 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:03:01.944246 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:03:01.944258 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:03:01.944268 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:03:01.944279 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:03:01.944290 | orchestrator | 2025-07-12 14:03:01.944301 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-07-12 14:03:01.944348 | orchestrator 
| Saturday 12 July 2025 13:58:14 +0000 (0:00:01.003) 0:00:04.139 ********* 2025-07-12 14:03:01.944359 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:03:01.944370 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:03:01.944381 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:03:01.944392 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:03:01.944403 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:03:01.944413 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:03:01.944424 | orchestrator | 2025-07-12 14:03:01.944435 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-07-12 14:03:01.944454 | orchestrator | Saturday 12 July 2025 13:58:15 +0000 (0:00:01.013) 0:00:05.152 ********* 2025-07-12 14:03:01.944479 | orchestrator | ok: [testbed-node-0] => { 2025-07-12 14:03:01.944499 | orchestrator |  "changed": false, 2025-07-12 14:03:01.944517 | orchestrator |  "msg": "All assertions passed" 2025-07-12 14:03:01.944528 | orchestrator | } 2025-07-12 14:03:01.944539 | orchestrator | ok: [testbed-node-1] => { 2025-07-12 14:03:01.944550 | orchestrator |  "changed": false, 2025-07-12 14:03:01.944561 | orchestrator |  "msg": "All assertions passed" 2025-07-12 14:03:01.944574 | orchestrator | } 2025-07-12 14:03:01.944587 | orchestrator | ok: [testbed-node-2] => { 2025-07-12 14:03:01.944600 | orchestrator |  "changed": false, 2025-07-12 14:03:01.944613 | orchestrator |  "msg": "All assertions passed" 2025-07-12 14:03:01.944625 | orchestrator | } 2025-07-12 14:03:01.944638 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 14:03:01.944651 | orchestrator |  "changed": false, 2025-07-12 14:03:01.944664 | orchestrator |  "msg": "All assertions passed" 2025-07-12 14:03:01.944677 | orchestrator | } 2025-07-12 14:03:01.944690 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 14:03:01.944702 | orchestrator |  "changed": false, 2025-07-12 14:03:01.944715 | orchestrator |  "msg": "All assertions passed" 2025-07-12 
14:03:01.944737 | orchestrator | } 2025-07-12 14:03:01.944750 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 14:03:01.944762 | orchestrator |  "changed": false, 2025-07-12 14:03:01.944775 | orchestrator |  "msg": "All assertions passed" 2025-07-12 14:03:01.944787 | orchestrator | } 2025-07-12 14:03:01.944800 | orchestrator | 2025-07-12 14:03:01.944828 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-07-12 14:03:01.944841 | orchestrator | Saturday 12 July 2025 13:58:16 +0000 (0:00:00.592) 0:00:05.745 ********* 2025-07-12 14:03:01.944854 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.944866 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.944879 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.944892 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.944905 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.944918 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.944931 | orchestrator | 2025-07-12 14:03:01.944942 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-07-12 14:03:01.944953 | orchestrator | Saturday 12 July 2025 13:58:16 +0000 (0:00:00.473) 0:00:06.219 ********* 2025-07-12 14:03:01.944964 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-07-12 14:03:01.944975 | orchestrator | 2025-07-12 14:03:01.944986 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-07-12 14:03:01.945075 | orchestrator | Saturday 12 July 2025 13:58:19 +0000 (0:00:03.206) 0:00:09.425 ********* 2025-07-12 14:03:01.945089 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-07-12 14:03:01.945101 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-07-12 14:03:01.945112 | orchestrator | 
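The `service-ks-register` tasks above are Kolla-Ansible's Keystone bootstrap for Neutron: create the service, its internal and public endpoints, the `service` project, the `neutron` user, and the role grants. For orientation, the same registration can be expressed with the plain OpenStack CLI; the commands below are an illustrative equivalent, with the region name and password as placeholders rather than values taken from this log:

```shell
# Illustrative equivalent of the service-ks-register tasks above.
# "RegionOne" and $NEUTRON_PASSWORD are placeholders, not from this log.
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network internal https://api-int.testbed.osism.xyz:9696
openstack endpoint create --region RegionOne network public https://api.testbed.osism.xyz:9696
openstack project create --domain default service
openstack user create --domain default --password "$NEUTRON_PASSWORD" neutron
openstack role add --project service --user neutron admin
openstack role add --project service --user neutron service
```

These commands need a live Keystone endpoint and admin credentials, so they are documentation rather than something to run against this testbed as-is.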
2025-07-12 14:03:01.945162 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-07-12 14:03:01.945175 | orchestrator | Saturday 12 July 2025 13:58:26 +0000 (0:00:06.788)       0:00:16.214 *********
2025-07-12 14:03:01.945186 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 14:03:01.945197 | orchestrator |
2025-07-12 14:03:01.945208 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-07-12 14:03:01.945219 | orchestrator | Saturday 12 July 2025 13:58:29 +0000 (0:00:03.183)       0:00:19.397 *********
2025-07-12 14:03:01.945230 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 14:03:01.945241 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-07-12 14:03:01.945252 | orchestrator |
2025-07-12 14:03:01.945263 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-07-12 14:03:01.945273 | orchestrator | Saturday 12 July 2025 13:58:33 +0000 (0:00:03.537)       0:00:22.935 *********
2025-07-12 14:03:01.945284 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 14:03:01.945295 | orchestrator |
2025-07-12 14:03:01.945306 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-07-12 14:03:01.945393 | orchestrator | Saturday 12 July 2025 13:58:36 +0000 (0:00:03.030)       0:00:25.965 *********
2025-07-12 14:03:01.945405 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-07-12 14:03:01.945415 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-07-12 14:03:01.945426 | orchestrator |
2025-07-12 14:03:01.945437 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-12 14:03:01.945448 | orchestrator | Saturday 12 July 2025 13:58:43 +0000 (0:00:07.041)       0:00:33.007 *********
2025-07-12 14:03:01.945460 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:01.945479 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:01.945491 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:01.945502 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:01.945512 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:01.945521 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:01.945531 | orchestrator |
2025-07-12 14:03:01.945541 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-07-12 14:03:01.945560 | orchestrator | Saturday 12 July 2025 13:58:44 +0000 (0:00:00.756)       0:00:33.764 *********
2025-07-12 14:03:01.945569 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:01.945579 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:01.945589 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:01.945599 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:01.945608 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:01.945618 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:01.945628 | orchestrator |
2025-07-12 14:03:01.945637 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-07-12 14:03:01.945647 | orchestrator | Saturday 12 July 2025 13:58:46 +0000 (0:00:02.486)       0:00:36.250 *********
2025-07-12 14:03:01.945657 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:03:01.945666 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:03:01.945676 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:03:01.945685 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:03:01.945695 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:03:01.945705 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:03:01.945716 | orchestrator |
2025-07-12 14:03:01.945727 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-12 14:03:01.945739 | orchestrator | Saturday 12 July 2025 13:58:48 +0000 (0:00:01.838)       0:00:38.089 *********
2025-07-12 14:03:01.945863 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:01.945876 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:01.945887 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:01.945898 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:01.945910 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:01.945921 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:01.945932 | orchestrator |
2025-07-12 14:03:01.945944 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-07-12 14:03:01.945955 | orchestrator | Saturday 12 July 2025 13:58:51 +0000 (0:00:03.209)       0:00:41.298 *********
2025-07-12 14:03:01.945977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946066 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946143 | orchestrator |
2025-07-12 14:03:01.946153 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-07-12 14:03:01.946163 | orchestrator | Saturday 12 July 2025 13:58:54 +0000 (0:00:03.049)       0:00:44.348 *********
2025-07-12 14:03:01.946173 | orchestrator | [WARNING]: Skipped
2025-07-12 14:03:01.946183 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-07-12 14:03:01.946193 | orchestrator | due to this access issue:
2025-07-12 14:03:01.946203 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-07-12 14:03:01.946212 | orchestrator | a directory
2025-07-12 14:03:01.946222 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 14:03:01.946232 | orchestrator |
2025-07-12 14:03:01.946242 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-12 14:03:01.946281 | orchestrator | Saturday 12 July 2025 13:58:55 +0000 (0:00:00.706)       0:00:45.055 *********
2025-07-12 14:03:01.946303 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 14:03:01.946340 | orchestrator |
2025-07-12 14:03:01.946351 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-07-12 14:03:01.946360 | orchestrator | Saturday 12 July 2025 13:58:56 +0000 (0:00:01.414)       0:00:46.470 *********
2025-07-12 14:03:01.946371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946480 | orchestrator |
2025-07-12 14:03:01.946490 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-07-12 14:03:01.946500 | orchestrator | Saturday 12 July 2025 13:59:01 +0000 (0:00:04.689)       0:00:51.159 *********
2025-07-12 14:03:01.946510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946521 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:01.946532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946542 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:01.946557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946573 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:01.946612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946624 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:01.946634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946644 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:01.946654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946664 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:01.946674 | orchestrator |
2025-07-12 14:03:01.946684 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-07-12 14:03:01.946694 | orchestrator | Saturday 12 July 2025 13:59:04 +0000 (0:00:02.812)       0:00:53.972 *********
2025-07-12 14:03:01.946708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946725 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:01.946761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946773 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:01.946784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.946794 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:01.946804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946814 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:01.946824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946834 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:01.946853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 14:03:01.946870 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:01.946880 | orchestrator |
2025-07-12 14:03:01.946890 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-07-12 14:03:01.946899 | orchestrator | Saturday 12 July 2025 13:59:07 +0000 (0:00:03.342)       0:00:57.314 *********
2025-07-12 14:03:01.946909 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:01.946919 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:01.946928 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:01.946938 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:01.946948 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:01.946957 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:01.946967 | orchestrator |
2025-07-12 14:03:01.946977 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-07-12 14:03:01.946991 | orchestrator | Saturday 12 July 2025 13:59:11 +0000 (0:00:03.622)       0:01:00.936 *********
2025-07-12 14:03:01.947002 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:01.947011 | orchestrator |
2025-07-12 14:03:01.947021 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-07-12 14:03:01.947031 | orchestrator | Saturday 12 July 2025 13:59:11 +0000 (0:00:00.130)       0:01:01.066 *********
2025-07-12 14:03:01.947040 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:01.947050 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:01.947060 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:01.947069 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:01.947079 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:01.947088 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:01.947098 | orchestrator |
2025-07-12 14:03:01.947108 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-07-12 14:03:01.947118 | orchestrator | Saturday 12 July 2025 13:59:12 +0000 (0:00:00.878)       0:01:01.945 *********
2025-07-12 14:03:01.947128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 14:03:01.947138 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:01.947148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:01.947163 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.947178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:01.947188 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.947205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.947216 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.947226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.947237 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.947247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-07-12 14:03:01.947257 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.947267 | orchestrator | 2025-07-12 14:03:01.947276 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-07-12 14:03:01.947286 | orchestrator | Saturday 12 July 2025 13:59:15 +0000 (0:00:03.506) 0:01:05.451 ********* 2025-07-12 14:03:01.947302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:01.947428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:01.947439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:01.947457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947467 | orchestrator | 2025-07-12 14:03:01.947477 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-07-12 14:03:01.947487 | orchestrator | Saturday 12 July 2025 13:59:20 +0000 (0:00:04.325) 0:01:09.777 ********* 2025-07-12 14:03:01.947502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:01.947542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:01.947573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:01.947584 | orchestrator | 2025-07-12 14:03:01.947593 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-07-12 14:03:01.947603 | orchestrator | Saturday 12 July 2025 13:59:25 +0000 (0:00:05.646) 0:01:15.424 ********* 2025-07-12 14:03:01.947620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.947631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947647 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.947657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947668 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.947678 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.947693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.947703 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.947721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947731 | orchestrator | 2025-07-12 14:03:01.947740 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-07-12 14:03:01.947748 | orchestrator | Saturday 12 July 2025 13:59:29 +0000 (0:00:03.374) 0:01:18.799 ********* 2025-07-12 14:03:01.947756 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.947764 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.947777 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.947786 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:01.947793 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:03:01.947801 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:03:01.947809 | orchestrator | 2025-07-12 14:03:01.947817 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-07-12 14:03:01.947825 | orchestrator | Saturday 12 July 2025 13:59:32 +0000 (0:00:03.216) 0:01:22.015 ********* 2025-07-12 14:03:01.947833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.947842 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.947850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.947858 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.947870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.947879 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.947892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.947924 | orchestrator | 2025-07-12 14:03:01.947932 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-07-12 14:03:01.947940 | orchestrator | Saturday 12 July 2025 13:59:37 +0000 (0:00:04.775) 0:01:26.790 ********* 2025-07-12 14:03:01.947948 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.947956 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.947964 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.947972 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.947980 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.947988 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.947996 | orchestrator | 2025-07-12 14:03:01.948004 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-07-12 14:03:01.948012 | orchestrator | Saturday 12 July 2025 13:59:39 +0000 (0:00:02.497) 
0:01:29.288 ********* 2025-07-12 14:03:01.948020 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948028 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.948035 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.948043 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.948051 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.948059 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.948067 | orchestrator | 2025-07-12 14:03:01.948075 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-07-12 14:03:01.948083 | orchestrator | Saturday 12 July 2025 13:59:42 +0000 (0:00:03.167) 0:01:32.456 ********* 2025-07-12 14:03:01.948091 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.948105 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.948114 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.948121 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948129 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.948137 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.948145 | orchestrator | 2025-07-12 14:03:01.948153 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-07-12 14:03:01.948161 | orchestrator | Saturday 12 July 2025 13:59:44 +0000 (0:00:01.986) 0:01:34.442 ********* 2025-07-12 14:03:01.948169 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.948177 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948185 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.948197 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.948205 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.948213 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.948221 | orchestrator | 2025-07-12 14:03:01.948229 | orchestrator | TASK [neutron : Copying over 
eswitchd.conf] ************************************ 2025-07-12 14:03:01.948237 | orchestrator | Saturday 12 July 2025 13:59:47 +0000 (0:00:02.792) 0:01:37.235 ********* 2025-07-12 14:03:01.948245 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948253 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.948261 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.948269 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.948277 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.948285 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.948293 | orchestrator | 2025-07-12 14:03:01.948304 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-07-12 14:03:01.948332 | orchestrator | Saturday 12 July 2025 13:59:49 +0000 (0:00:02.211) 0:01:39.446 ********* 2025-07-12 14:03:01.948340 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.948348 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948356 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.948364 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.948372 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.948380 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.948388 | orchestrator | 2025-07-12 14:03:01.948396 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-07-12 14:03:01.948404 | orchestrator | Saturday 12 July 2025 13:59:51 +0000 (0:00:01.926) 0:01:41.373 ********* 2025-07-12 14:03:01.948412 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:01.948420 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948428 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:01.948435 | orchestrator | skipping: [testbed-node-0] 
2025-07-12 14:03:01.948443 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:01.948451 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.948459 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:01.948467 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.948475 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:01.948483 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.948491 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-12 14:03:01.948499 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.948507 | orchestrator | 2025-07-12 14:03:01.948515 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-07-12 14:03:01.948523 | orchestrator | Saturday 12 July 2025 13:59:53 +0000 (0:00:02.112) 0:01:43.485 ********* 2025-07-12 14:03:01.948531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-07-12 14:03:01.948544 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.948557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:01.948565 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-07-12 14:03:01.948587 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.948595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.948604 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.948612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.948620 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.948628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.948641 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.948649 | orchestrator | 2025-07-12 14:03:01.948657 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-07-12 14:03:01.948665 | orchestrator | Saturday 12 July 2025 13:59:56 +0000 (0:00:02.308) 0:01:45.793 ********* 2025-07-12 14:03:01.948677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:01.948686 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.948699 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.948707 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.948715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:01.948724 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.948745 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.948753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:01.948762 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.948773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.948782 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.948790 | orchestrator | 2025-07-12 14:03:01.948798 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-07-12 14:03:01.948806 | orchestrator | Saturday 12 July 2025 13:59:59 +0000 (0:00:03.269) 0:01:49.063 ********* 2025-07-12 14:03:01.948814 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948821 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.948829 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.948837 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.948845 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.948857 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.948865 | orchestrator | 2025-07-12 14:03:01.948874 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-07-12 14:03:01.948881 | orchestrator | Saturday 12 July 2025 14:00:02 +0000 (0:00:03.085) 0:01:52.148 ********* 2025-07-12 14:03:01.948889 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.948897 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948905 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.948913 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:03:01.948921 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:03:01.948929 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:03:01.948937 | orchestrator | 
2025-07-12 14:03:01.948945 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-07-12 14:03:01.948953 | orchestrator | Saturday 12 July 2025 14:00:07 +0000 (0:00:04.752) 0:01:56.900 ********* 2025-07-12 14:03:01.948961 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.948969 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.948976 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.948984 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.948992 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.949000 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.949008 | orchestrator | 2025-07-12 14:03:01.949022 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-07-12 14:03:01.949030 | orchestrator | Saturday 12 July 2025 14:00:09 +0000 (0:00:02.122) 0:01:59.023 ********* 2025-07-12 14:03:01.949038 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.949046 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.949054 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.949062 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.949070 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.949078 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.949085 | orchestrator | 2025-07-12 14:03:01.949093 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-07-12 14:03:01.949101 | orchestrator | Saturday 12 July 2025 14:00:12 +0000 (0:00:02.813) 0:02:01.836 ********* 2025-07-12 14:03:01.949109 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.949117 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.949125 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.949133 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.949141 | orchestrator | 
skipping: [testbed-node-3] 2025-07-12 14:03:01.949149 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.949157 | orchestrator | 2025-07-12 14:03:01.949165 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-07-12 14:03:01.949173 | orchestrator | Saturday 12 July 2025 14:00:14 +0000 (0:00:02.658) 0:02:04.495 ********* 2025-07-12 14:03:01.949181 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.949189 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.949197 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.949205 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.949213 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.949221 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.949228 | orchestrator | 2025-07-12 14:03:01.949236 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-07-12 14:03:01.949244 | orchestrator | Saturday 12 July 2025 14:00:17 +0000 (0:00:02.915) 0:02:07.411 ********* 2025-07-12 14:03:01.949252 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.949260 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.949268 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.949276 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.949284 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.949292 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.949300 | orchestrator | 2025-07-12 14:03:01.949324 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-07-12 14:03:01.949333 | orchestrator | Saturday 12 July 2025 14:00:20 +0000 (0:00:02.717) 0:02:10.128 ********* 2025-07-12 14:03:01.949341 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.949349 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.949357 | orchestrator | 
skipping: [testbed-node-2] 2025-07-12 14:03:01.949365 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.949373 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.949381 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.949389 | orchestrator | 2025-07-12 14:03:01.949396 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-07-12 14:03:01.949404 | orchestrator | Saturday 12 July 2025 14:00:22 +0000 (0:00:01.891) 0:02:12.020 ********* 2025-07-12 14:03:01.949412 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.949420 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.949428 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.949440 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.949448 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.949456 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.949464 | orchestrator | 2025-07-12 14:03:01.949472 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-07-12 14:03:01.949480 | orchestrator | Saturday 12 July 2025 14:00:25 +0000 (0:00:02.837) 0:02:14.858 ********* 2025-07-12 14:03:01.949493 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.949501 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.949509 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.949517 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.949525 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.949533 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.949541 | orchestrator | 2025-07-12 14:03:01.949549 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-07-12 14:03:01.949557 | orchestrator | Saturday 12 July 2025 14:00:27 +0000 (0:00:02.423) 0:02:17.282 ********* 2025-07-12 14:03:01.949565 | orchestrator | 
skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:01.949573 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.949581 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:01.949666 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.949680 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:01.949688 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.949696 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:01.949705 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.949712 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:01.949720 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.949728 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-12 14:03:01.949736 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.949744 | orchestrator | 2025-07-12 14:03:01.949755 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-07-12 14:03:01.949769 | orchestrator | Saturday 12 July 2025 14:00:30 +0000 (0:00:02.711) 0:02:19.993 ********* 2025-07-12 14:03:01.949778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:01.949787 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.949795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:01.949803 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.949823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 14:03:01.949832 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.949844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.949853 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.949861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.949870 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.949878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 14:03:01.949886 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.949894 | orchestrator | 2025-07-12 14:03:01.949902 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-07-12 14:03:01.949910 | orchestrator | Saturday 12 July 2025 14:00:34 +0000 (0:00:03.676) 0:02:23.670 ********* 2025-07-12 14:03:01.949918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.949935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.949950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 
14:03:01.949959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:01.949967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 14:03:01.949976 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 14:03:01.949991 | orchestrator | 2025-07-12 14:03:01.950000 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-12 14:03:01.950008 | orchestrator | Saturday 12 July 2025 14:00:37 +0000 (0:00:03.552) 0:02:27.223 ********* 2025-07-12 14:03:01.950043 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:01.950053 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:01.950061 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:01.950069 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:01.950077 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:01.950085 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:01.950092 | orchestrator | 2025-07-12 14:03:01.950104 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-07-12 14:03:01.950112 | orchestrator | Saturday 12 July 2025 14:00:38 +0000 (0:00:00.517) 0:02:27.740 ********* 2025-07-12 14:03:01.950120 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:01.950128 | orchestrator | 2025-07-12 14:03:01.950136 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-07-12 14:03:01.950144 | orchestrator | Saturday 12 July 2025 14:00:40 +0000 (0:00:02.306) 0:02:30.046 ********* 2025-07-12 14:03:01.950152 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:01.950160 | orchestrator | 2025-07-12 14:03:01.950168 | orchestrator | TASK [neutron : 
Running Neutron bootstrap container] *************************** 2025-07-12 14:03:01.950176 | orchestrator | Saturday 12 July 2025 14:00:43 +0000 (0:00:02.579) 0:02:32.625 ********* 2025-07-12 14:03:01.950184 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:01.950192 | orchestrator | 2025-07-12 14:03:01.950200 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:01.950207 | orchestrator | Saturday 12 July 2025 14:01:24 +0000 (0:00:41.328) 0:03:13.954 ********* 2025-07-12 14:03:01.950215 | orchestrator | 2025-07-12 14:03:01.950223 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:01.950231 | orchestrator | Saturday 12 July 2025 14:01:24 +0000 (0:00:00.067) 0:03:14.022 ********* 2025-07-12 14:03:01.950239 | orchestrator | 2025-07-12 14:03:01.950247 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:01.950259 | orchestrator | Saturday 12 July 2025 14:01:24 +0000 (0:00:00.299) 0:03:14.321 ********* 2025-07-12 14:03:01.950268 | orchestrator | 2025-07-12 14:03:01.950276 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:01.950283 | orchestrator | Saturday 12 July 2025 14:01:24 +0000 (0:00:00.072) 0:03:14.394 ********* 2025-07-12 14:03:01.950291 | orchestrator | 2025-07-12 14:03:01.950299 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:01.950307 | orchestrator | Saturday 12 July 2025 14:01:24 +0000 (0:00:00.067) 0:03:14.462 ********* 2025-07-12 14:03:01.950393 | orchestrator | 2025-07-12 14:03:01.950401 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 14:03:01.950409 | orchestrator | Saturday 12 July 2025 14:01:25 +0000 (0:00:00.071) 0:03:14.533 ********* 2025-07-12 14:03:01.950418 | 
orchestrator | 2025-07-12 14:03:01.950426 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-07-12 14:03:01.950434 | orchestrator | Saturday 12 July 2025 14:01:25 +0000 (0:00:00.069) 0:03:14.603 ********* 2025-07-12 14:03:01.950442 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:01.950456 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:03:01.950464 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:03:01.950472 | orchestrator | 2025-07-12 14:03:01.950480 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-07-12 14:03:01.950488 | orchestrator | Saturday 12 July 2025 14:01:55 +0000 (0:00:30.165) 0:03:44.768 ********* 2025-07-12 14:03:01.950496 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:03:01.950504 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:03:01.950512 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:03:01.950520 | orchestrator | 2025-07-12 14:03:01.950528 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:03:01.950537 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 14:03:01.950546 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-12 14:03:01.950554 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-12 14:03:01.950562 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 14:03:01.950570 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 14:03:01.950578 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 14:03:01.950586 | orchestrator | 2025-07-12 
14:03:01.950594 | orchestrator | 2025-07-12 14:03:01.950602 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:03:01.950610 | orchestrator | Saturday 12 July 2025 14:02:58 +0000 (0:01:03.684) 0:04:48.453 ********* 2025-07-12 14:03:01.950618 | orchestrator | =============================================================================== 2025-07-12 14:03:01.950626 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 63.69s 2025-07-12 14:03:01.950634 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.33s 2025-07-12 14:03:01.950641 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.17s 2025-07-12 14:03:01.950649 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.04s 2025-07-12 14:03:01.950655 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.79s 2025-07-12 14:03:01.950662 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.65s 2025-07-12 14:03:01.950669 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.78s 2025-07-12 14:03:01.950675 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.75s 2025-07-12 14:03:01.950686 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.69s 2025-07-12 14:03:01.950693 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.33s 2025-07-12 14:03:01.950700 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.68s 2025-07-12 14:03:01.950706 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.62s 2025-07-12 14:03:01.950713 | orchestrator | neutron : Check neutron containers 
-------------------------------------- 3.55s 2025-07-12 14:03:01.950720 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.54s 2025-07-12 14:03:01.950726 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.51s 2025-07-12 14:03:01.950733 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.37s 2025-07-12 14:03:01.950740 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.34s 2025-07-12 14:03:01.950752 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 3.27s 2025-07-12 14:03:01.950759 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.22s 2025-07-12 14:03:01.950766 | orchestrator | Setting sysctl values --------------------------------------------------- 3.21s 2025-07-12 14:03:01.950777 | orchestrator | 2025-07-12 14:03:01 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED 2025-07-12 14:03:01.950784 | orchestrator | 2025-07-12 14:03:01 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:03:01.950791 | orchestrator | 2025-07-12 14:03:01 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:03:01.950798 | orchestrator | 2025-07-12 14:03:01 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:03:04.974589 | orchestrator | 2025-07-12 14:03:04 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED 2025-07-12 14:03:04.974814 | orchestrator | 2025-07-12 14:03:04 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED 2025-07-12 14:03:04.975399 | orchestrator | 2025-07-12 14:03:04 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:03:04.975937 | orchestrator | 2025-07-12 14:03:04 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 
14:03:04.975960 | orchestrator | 2025-07-12 14:03:04 | INFO  | Wait 1 second(s) until the next check
[… identical status cycles repeat every ~3 s from 14:03:07 through 14:03:50: Tasks 7c158e8c-e962-46a7-b902-4df9c482fb4c, 42a80554-3c06-44fc-a9d5-c5cb45a14aa4, 3862bf3e-b70a-494b-9efe-fb6417075119, and 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff all remain in state STARTED …]
2025-07-12 14:03:53.607161 | orchestrator
| 2025-07-12 14:03:53 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED 2025-07-12 14:03:53.609727 | orchestrator | 2025-07-12 14:03:53 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED 2025-07-12 14:03:53.612298 | orchestrator | 2025-07-12 14:03:53 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:03:53.614679 | orchestrator | 2025-07-12 14:03:53 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:03:53.614719 | orchestrator | 2025-07-12 14:03:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:03:56.656993 | orchestrator | 2025-07-12 14:03:56 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED 2025-07-12 14:03:56.658517 | orchestrator | 2025-07-12 14:03:56 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED 2025-07-12 14:03:56.660648 | orchestrator | 2025-07-12 14:03:56 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state STARTED 2025-07-12 14:03:56.662098 | orchestrator | 2025-07-12 14:03:56 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:03:56.662133 | orchestrator | 2025-07-12 14:03:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:03:59.703230 | orchestrator | 2025-07-12 14:03:59 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:03:59.705367 | orchestrator | 2025-07-12 14:03:59 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED 2025-07-12 14:03:59.707441 | orchestrator | 2025-07-12 14:03:59 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED 2025-07-12 14:03:59.712635 | orchestrator | 2025-07-12 14:03:59 | INFO  | Task 3862bf3e-b70a-494b-9efe-fb6417075119 is in state SUCCESS 2025-07-12 14:03:59.715005 | orchestrator | 2025-07-12 14:03:59.715040 | orchestrator | 2025-07-12 14:03:59.715070 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-07-12 14:03:59.715083 | orchestrator | 2025-07-12 14:03:59.715095 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:03:59.715106 | orchestrator | Saturday 12 July 2025 14:00:27 +0000 (0:00:00.210) 0:00:00.210 ********* 2025-07-12 14:03:59.715117 | orchestrator | ok: [testbed-manager] 2025-07-12 14:03:59.715130 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:03:59.715140 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:03:59.715151 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:03:59.715162 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:03:59.715173 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:03:59.715263 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:03:59.715279 | orchestrator | 2025-07-12 14:03:59.715290 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:03:59.715323 | orchestrator | Saturday 12 July 2025 14:00:29 +0000 (0:00:01.381) 0:00:01.591 ********* 2025-07-12 14:03:59.715335 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-07-12 14:03:59.715347 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-07-12 14:03:59.715358 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-07-12 14:03:59.715369 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-07-12 14:03:59.715380 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-07-12 14:03:59.715391 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-07-12 14:03:59.715402 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-07-12 14:03:59.715413 | orchestrator | 2025-07-12 14:03:59.715424 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-07-12 14:03:59.715460 | orchestrator | 2025-07-12 14:03:59.715471 | 
orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-12 14:03:59.715483 | orchestrator | Saturday 12 July 2025 14:00:30 +0000 (0:00:01.053) 0:00:02.645 ********* 2025-07-12 14:03:59.715495 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 14:03:59.715508 | orchestrator | 2025-07-12 14:03:59.715519 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-07-12 14:03:59.716035 | orchestrator | Saturday 12 July 2025 14:00:32 +0000 (0:00:02.682) 0:00:05.327 ********* 2025-07-12 14:03:59.716054 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 14:03:59.716071 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.716085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.716096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.716129 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.716144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.716171 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 14:03:59.716281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.716294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.716342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.716355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.716386 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.716399 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.716421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.716433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.716444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.716456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.716468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.716480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.716503 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.716522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.716535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.716546 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.716558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.716569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.716581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.716593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.716615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.716634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.716648 | orchestrator | 2025-07-12 14:03:59.717189 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-12 14:03:59.717208 | orchestrator | Saturday 12 July 2025 14:00:37 +0000 (0:00:04.382) 0:00:09.709 ********* 2025-07-12 14:03:59.717221 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 
2025-07-12 14:03:59.717233 | orchestrator | 2025-07-12 14:03:59.717245 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-07-12 14:03:59.717257 | orchestrator | Saturday 12 July 2025 14:00:38 +0000 (0:00:01.205) 0:00:10.915 ********* 2025-07-12 14:03:59.717269 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 14:03:59.717282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.717294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.717330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.717380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.717404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.717416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.717429 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.717441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.717453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.717465 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.717478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.717527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.717541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.717554 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.717566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.717578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.717590 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.717602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.717614 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.717670 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 14:03:59.717685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.717698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.717709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.717721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 14:03:59.718219 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.718244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 
14:03:59.718384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.718403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 14:03:59.718415 | orchestrator | 2025-07-12 14:03:59.718426 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-07-12 14:03:59.718438 | orchestrator | Saturday 12 July 2025 14:00:45 +0000 (0:00:06.583) 0:00:17.499 ********* 2025-07-12 14:03:59.718450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.718461 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.718473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.718484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.718505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.718517 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:59.718602 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 14:03:59.718620 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.718631 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.718643 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 14:03:59.718656 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.718675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.718687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.718734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.718892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.718908 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.718920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.718932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.718943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.719218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719231 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:03:59.719244 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:59.719257 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:59.719337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 
14:03:59.719353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.719365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.719377 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:59.719388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.719399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.719418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.719430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.719441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.719492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.719506 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:59.719517 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:59.719529 | orchestrator |
2025-07-12 14:03:59.719540 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-07-12 14:03:59.719551 | orchestrator | Saturday 12 July 2025 14:00:47 +0000 (0:00:02.647) 0:00:20.147 *********
2025-07-12 14:03:59.719563 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 14:03:59.719575 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter',
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.719595 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.719607 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 
14:03:59.719620 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.719682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.719723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.719746 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:03:59.719758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719781 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:59.719831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.719845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719857 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:59.719868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.719886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.719921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 14:03:59.719932 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:59.719974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.719993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.720005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.720016 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:59.720034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.720046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 
14:03:59.720057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.720069 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:59.720080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 14:03:59.720092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 14:03:59.720139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720153 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:59.720164 | orchestrator |
2025-07-12 14:03:59.720175 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-07-12 14:03:59.720186 | orchestrator | Saturday 12 July 2025 14:00:50 +0000 (0:00:02.343) 0:00:22.490 *********
2025-07-12 14:03:59.720197 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 14:03:59.720215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12
14:03:59.720227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.720238 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.720249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 14:03:59.720261 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:59.720328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:59.720344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:59.720363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.720375 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720386 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.720410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.720422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.720503 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 14:03:59.720515 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.720538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720550 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.720628 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.720639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.720662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.720674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.720685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.720697 | orchestrator |
2025-07-12 14:03:59.720708 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-07-12 14:03:59.720719 | orchestrator | Saturday 12 July 2025 14:00:56 +0000 (0:00:06.382) 0:00:28.873 *********
2025-07-12 14:03:59.720731 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 14:03:59.720748 | orchestrator |
2025-07-12 14:03:59.720759 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-07-12 14:03:59.720799 | orchestrator | Saturday 12 July 2025 14:00:57 +0000
(0:00:00.817) 0:00:29.691 *********
2025-07-12 14:03:59.720818 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094854, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720831 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094854, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720843 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094854, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720855 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094854, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720866 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094854, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720878 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094838, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720919 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094838, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720950 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094854, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720962 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094812, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720974 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094854, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720986 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094838, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.720997 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094838, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721009 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094815, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721057 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094838, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721076 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094812, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721087 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094838, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721099 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094838, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721110 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094812, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721122 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094812, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721133 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094812, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721151 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094812, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721197 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094830, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721211 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094815, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime':
1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721222 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094815, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721234 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094812, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721245 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094815, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721257 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094830, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721274 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094815, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721341 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094830, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721356 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094818, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4568303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721368 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094815, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721379 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094830, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721391 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094818, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4568303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721403 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094828, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721422 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094815, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4548302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721469 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094818, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4568303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721483 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094818, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4568303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721495 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094830, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721507 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094828, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721518 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094830, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721536 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094840, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721548 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094840, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721593 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094818, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4568303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721606 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094828, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721618 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094828, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 14:03:59.721630 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094830,
'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.721642 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094850, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721659 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094818, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4568303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721671 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094850, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721712 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094840, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721726 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094868, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4728305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721737 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094828, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 
14:03:59.721782 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094840, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721795 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094850, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721813 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094845, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721825 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094868, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4728305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721873 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094828, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721887 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094816, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4558303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721899 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094840, 'dev': 93, 'nlink': 1, 
'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721911 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094850, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721922 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094818, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4568303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.721943 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094868, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4728305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.721955 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094826, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722000 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094840, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722067 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094845, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722082 | 
orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094868, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4728305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722094 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094850, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722113 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094850, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722125 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094845, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722137 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094868, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4728305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722188 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094845, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722202 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094816, 'dev': 93, 'nlink': 1, 'atime': 
1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4558303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722214 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094811, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4538302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722225 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094828, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4588304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.722244 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094868, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4728305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722255 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094826, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722267 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094816, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4558303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722289 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094845, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722362 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094816, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4558303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722376 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094833, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4628303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722388 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094811, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4538302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722407 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094845, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722418 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094840, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4648304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.722430 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094826, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722454 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094816, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 
'mtime': 1752278522.0, 'ctime': 1752326059.4558303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722466 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094826, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722478 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094816, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4558303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722496 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094811, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4538302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722508 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094866, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4718304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722519 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094826, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722531 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094833, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4628303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722553 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094850, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.722565 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094811, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4538302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722577 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094811, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4538302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722594 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094821, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722606 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094833, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4628303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722617 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094826, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722629 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094866, 'dev': 93, 'nlink': 1, 
'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4718304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722651 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094833, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4628303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722663 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094866, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4718304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722675 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094833, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4628303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722693 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094811, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4538302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722704 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094857, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722715 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:59.722727 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094821, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722739 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094866, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4718304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722760 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094857, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722772 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:59.722783 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094868, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4728305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.722802 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094866, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4718304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722813 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094821, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722825 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094833, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4628303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722836 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094821, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722848 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094821, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722872 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094866, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4718304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722884 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094857, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 
'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722908 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:59.722919 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094857, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722929 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:59.722939 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094857, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722949 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:59.722960 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 
'inode': 1094821, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722970 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094845, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4668305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.722980 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094857, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 14:03:59.722990 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:59.723011 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094816, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 
'ctime': 1752326059.4558303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.723030 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094826, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.723040 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094811, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4538302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.723050 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094833, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4628303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.723060 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094866, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4718304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.723071 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094821, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4578302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.723081 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094857, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4688303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 14:03:59.723091 | orchestrator | 2025-07-12 14:03:59.723101 | 
orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-07-12 14:03:59.723111 | orchestrator | Saturday 12 July 2025 14:01:20 +0000 (0:00:22.788) 0:00:52.480 ********* 2025-07-12 14:03:59.723121 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 14:03:59.723131 | orchestrator | 2025-07-12 14:03:59.723141 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-07-12 14:03:59.723160 | orchestrator | Saturday 12 July 2025 14:01:20 +0000 (0:00:00.708) 0:00:53.189 ********* 2025-07-12 14:03:59.723171 | orchestrator | [WARNING]: Skipped 2025-07-12 14:03:59.723185 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723196 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-07-12 14:03:59.723205 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723215 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-07-12 14:03:59.723225 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 14:03:59.723235 | orchestrator | [WARNING]: Skipped 2025-07-12 14:03:59.723245 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723254 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-07-12 14:03:59.723264 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723274 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-07-12 14:03:59.723284 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 14:03:59.723294 | orchestrator | [WARNING]: Skipped 2025-07-12 14:03:59.723399 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723420 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 
2025-07-12 14:03:59.723436 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723453 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-07-12 14:03:59.723464 | orchestrator | [WARNING]: Skipped 2025-07-12 14:03:59.723474 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723484 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-07-12 14:03:59.723493 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723503 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-07-12 14:03:59.723512 | orchestrator | [WARNING]: Skipped 2025-07-12 14:03:59.723522 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723532 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-07-12 14:03:59.723541 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723551 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-07-12 14:03:59.723561 | orchestrator | [WARNING]: Skipped 2025-07-12 14:03:59.723570 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723580 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-07-12 14:03:59.723590 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723599 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-07-12 14:03:59.723609 | orchestrator | [WARNING]: Skipped 2025-07-12 14:03:59.723619 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723628 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-07-12 14:03:59.723638 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 14:03:59.723647 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-07-12 14:03:59.723657 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 14:03:59.723667 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 14:03:59.723676 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 14:03:59.723686 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 14:03:59.723696 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 14:03:59.723706 | orchestrator | 2025-07-12 14:03:59.723715 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-07-12 14:03:59.723733 | orchestrator | Saturday 12 July 2025 14:01:22 +0000 (0:00:01.897) 0:00:55.086 ********* 2025-07-12 14:03:59.723743 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 14:03:59.723753 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 14:03:59.723763 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:59.723772 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:59.723782 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 14:03:59.723792 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:59.723801 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 14:03:59.723811 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:59.723821 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 14:03:59.723831 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:59.723840 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 14:03:59.723850 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:59.723859 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-07-12 14:03:59.723869 | orchestrator | 2025-07-12 14:03:59.723879 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-07-12 14:03:59.723888 | orchestrator | Saturday 12 July 2025 14:01:37 +0000 (0:00:14.827) 0:01:09.914 ********* 2025-07-12 14:03:59.723898 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 14:03:59.723961 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 14:03:59.723973 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:59.723989 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:59.723999 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 14:03:59.724009 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:59.724018 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 14:03:59.724028 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:59.724038 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 14:03:59.724048 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:59.724058 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 14:03:59.724068 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:59.724077 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-07-12 14:03:59.724087 | 
orchestrator | 2025-07-12 14:03:59.724097 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-07-12 14:03:59.724107 | orchestrator | Saturday 12 July 2025 14:01:41 +0000 (0:00:03.701) 0:01:13.615 ********* 2025-07-12 14:03:59.724117 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 14:03:59.724127 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:59.724137 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 14:03:59.724147 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:59.724157 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-07-12 14:03:59.724167 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 14:03:59.724183 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:59.724193 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 14:03:59.724203 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:59.724213 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 14:03:59.724223 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:59.724232 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 14:03:59.724242 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:59.724252 | orchestrator | 2025-07-12 14:03:59.724262 | orchestrator | TASK 
[prometheus : Find custom Alertmanager alert notification templates] ****** 2025-07-12 14:03:59.724272 | orchestrator | Saturday 12 July 2025 14:01:43 +0000 (0:00:01.763) 0:01:15.379 ********* 2025-07-12 14:03:59.724281 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 14:03:59.724291 | orchestrator | 2025-07-12 14:03:59.724301 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-07-12 14:03:59.724328 | orchestrator | Saturday 12 July 2025 14:01:43 +0000 (0:00:00.779) 0:01:16.158 ********* 2025-07-12 14:03:59.724338 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:03:59.724348 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:59.724358 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:59.724368 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:59.724377 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:59.724387 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:59.724397 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:59.724407 | orchestrator | 2025-07-12 14:03:59.724417 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-07-12 14:03:59.724427 | orchestrator | Saturday 12 July 2025 14:01:44 +0000 (0:00:00.882) 0:01:17.041 ********* 2025-07-12 14:03:59.724437 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:03:59.724447 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:59.724456 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:59.724466 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:59.724476 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:59.724486 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:03:59.724496 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:03:59.724505 | orchestrator | 2025-07-12 14:03:59.724515 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] 
*********** 2025-07-12 14:03:59.724525 | orchestrator | Saturday 12 July 2025 14:01:46 +0000 (0:00:02.254) 0:01:19.295 ********* 2025-07-12 14:03:59.724535 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 14:03:59.724545 | orchestrator | skipping: [testbed-manager] 2025-07-12 14:03:59.724555 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 14:03:59.724564 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:03:59.724574 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 14:03:59.724584 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:03:59.724594 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 14:03:59.724604 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 14:03:59.724614 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 14:03:59.724629 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:03:59.724639 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:03:59.724653 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:03:59.724663 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 14:03:59.724679 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:03:59.724689 | orchestrator | 2025-07-12 14:03:59.724699 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-07-12 14:03:59.724709 | orchestrator | Saturday 12 July 2025 14:01:48 +0000 (0:00:01.562) 0:01:20.858 ********* 2025-07-12 14:03:59.724719 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  
2025-07-12 14:03:59.724729 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:59.724739 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:59.724748 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:59.724758 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:59.724768 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:59.724778 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:59.724788 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:59.724798 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:59.724808 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:59.724817 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 14:03:59.724827 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:59.724837 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:59.724847 | orchestrator |
2025-07-12 14:03:59.724857 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-07-12 14:03:59.724866 | orchestrator | Saturday 12 July 2025 14:01:50 +0000 (0:00:01.535) 0:01:22.394 *********
2025-07-12 14:03:59.724876 | orchestrator | [WARNING]: Skipped
2025-07-12 14:03:59.724886 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-07-12 14:03:59.724896 | orchestrator | due to this access issue:
2025-07-12 14:03:59.724906 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-07-12 14:03:59.724915 | orchestrator | not a directory
2025-07-12 14:03:59.724925 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 14:03:59.724935 | orchestrator |
2025-07-12 14:03:59.724945 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-07-12 14:03:59.724955 | orchestrator | Saturday 12 July 2025 14:01:51 +0000 (0:00:01.339) 0:01:23.733 *********
2025-07-12 14:03:59.724965 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:03:59.724974 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:59.724984 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:59.724994 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:59.725004 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:59.725014 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:59.725023 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:59.725033 | orchestrator |
2025-07-12 14:03:59.725043 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-07-12 14:03:59.725053 | orchestrator | Saturday 12 July 2025 14:01:52 +0000 (0:00:00.829) 0:01:25.072 *********
2025-07-12 14:03:59.725063 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:03:59.725072 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:03:59.725082 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:03:59.725092 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:03:59.725101 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:03:59.725111 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:03:59.725121 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:03:59.725130 | orchestrator |
2025-07-12 14:03:59.725140 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-07-12 14:03:59.725156 | orchestrator | Saturday 12 July 2025 14:01:53 +0000 (0:00:00.829) 0:01:25.901 *********
2025-07-12 14:03:59.725167 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 14:03:59.725187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:59.725198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:59.725209 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:59.725219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:59.725230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:59.725240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:59.725256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 14:03:59.725266 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.725286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.725298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.725322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.725333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.725343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.725354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.725374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 14:03:59.725395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.725407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.725417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.725427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.725437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.725448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.725464 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.725474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.725494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.725505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 14:03:59.725515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.725525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.725535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 14:03:59.725551 | orchestrator |
2025-07-12 14:03:59.725561 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-07-12 14:03:59.725571 | orchestrator | Saturday 12 July 2025 14:01:58 +0000 (0:00:04.731) 0:01:30.633 *********
2025-07-12 14:03:59.725581 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-12 14:03:59.725591 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:03:59.725601 | orchestrator |
2025-07-12 14:03:59.725611 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:59.725621 | orchestrator | Saturday 12 July 2025 14:01:59 +0000 (0:00:01.194) 0:01:31.827 *********
2025-07-12 14:03:59.725630 | orchestrator |
2025-07-12 14:03:59.725640 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:59.725650 | orchestrator | Saturday 12 July 2025 14:01:59 +0000 (0:00:00.192) 0:01:32.020 *********
2025-07-12 14:03:59.725660 | orchestrator |
2025-07-12 14:03:59.725669 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:59.725679 | orchestrator | Saturday 12 July 2025 14:01:59 +0000 (0:00:00.070) 0:01:32.091 *********
2025-07-12 14:03:59.725689 | orchestrator |
2025-07-12 14:03:59.725699 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:59.725709 | orchestrator | Saturday 12 July 2025 14:01:59 +0000 (0:00:00.066) 0:01:32.157 *********
2025-07-12 14:03:59.725718 | orchestrator |
2025-07-12 14:03:59.725728 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:59.725738 | orchestrator | Saturday 12 July 2025 14:01:59 +0000 (0:00:00.065) 0:01:32.223 *********
2025-07-12 14:03:59.725748 | orchestrator |
2025-07-12 14:03:59.725758 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:59.725767 | orchestrator | Saturday 12 July 2025 14:01:59 +0000 (0:00:00.066) 0:01:32.289 *********
2025-07-12 14:03:59.725777 | orchestrator |
2025-07-12 14:03:59.725787 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-12 14:03:59.725797 | orchestrator | Saturday 12 July 2025 14:01:59 +0000 (0:00:00.071) 0:01:32.360 *********
2025-07-12 14:03:59.725807 | orchestrator |
2025-07-12 14:03:59.725817 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-07-12 14:03:59.725826 | orchestrator | Saturday 12 July 2025 14:02:00 +0000 (0:00:00.106) 0:01:32.467 *********
2025-07-12 14:03:59.725836 | orchestrator | changed: [testbed-manager]
2025-07-12 14:03:59.725846 | orchestrator |
2025-07-12 14:03:59.725855 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-07-12 14:03:59.725865 | orchestrator | Saturday 12 July 2025 14:02:34 +0000 (0:00:34.039) 0:02:06.506 *********
2025-07-12 14:03:59.725875 | orchestrator | changed: [testbed-manager]
2025-07-12 14:03:59.725889 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:03:59.725899 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:03:59.725915 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:03:59.725925 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:03:59.725935 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:03:59.725944 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:03:59.725954 | orchestrator |
2025-07-12 14:03:59.725964 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-07-12 14:03:59.725974 | orchestrator | Saturday 12 July 2025 14:02:50 +0000 (0:00:16.324) 0:02:22.830 *********
2025-07-12 14:03:59.725984 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:03:59.725993 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:03:59.726003 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:03:59.726013 | orchestrator |
2025-07-12 14:03:59.726054 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-07-12 14:03:59.726064 | orchestrator | Saturday 12 July 2025 14:03:00 +0000 (0:00:10.427) 0:02:33.258 *********
2025-07-12 14:03:59.726080 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:03:59.726090 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:03:59.726099 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:03:59.726109 | orchestrator |
2025-07-12 14:03:59.726118 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-07-12 14:03:59.726128 | orchestrator | Saturday 12 July 2025 14:03:13 +0000 (0:00:12.399) 0:02:45.658 *********
2025-07-12 14:03:59.726138 | orchestrator | changed: [testbed-manager]
2025-07-12 14:03:59.726148 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:03:59.726157 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:03:59.726167 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:03:59.726177 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:03:59.726186 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:03:59.726196 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:59.726206 | orchestrator | 2025-07-12 14:03:59.726215 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-07-12 14:03:59.726225 | orchestrator | Saturday 12 July 2025 14:03:27 +0000 (0:00:14.468) 0:03:00.126 ********* 2025-07-12 14:03:59.726235 | orchestrator | changed: [testbed-manager] 2025-07-12 14:03:59.726244 | orchestrator | 2025-07-12 14:03:59.726254 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-07-12 14:03:59.726264 | orchestrator | Saturday 12 July 2025 14:03:35 +0000 (0:00:08.152) 0:03:08.278 ********* 2025-07-12 14:03:59.726273 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:03:59.726283 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:03:59.726293 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:03:59.726345 | orchestrator | 2025-07-12 14:03:59.726357 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-07-12 14:03:59.726367 | orchestrator | Saturday 12 July 2025 14:03:41 +0000 (0:00:05.847) 0:03:14.125 ********* 2025-07-12 14:03:59.726377 | orchestrator | changed: [testbed-manager] 2025-07-12 14:03:59.726387 | orchestrator | 2025-07-12 14:03:59.726397 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-07-12 14:03:59.726407 | orchestrator | Saturday 12 July 2025 14:03:47 +0000 (0:00:05.341) 0:03:19.467 ********* 2025-07-12 14:03:59.726416 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:03:59.726426 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:03:59.726436 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:03:59.726446 | orchestrator | 2025-07-12 14:03:59.726455 | orchestrator | PLAY RECAP 
********************************************************************* 2025-07-12 14:03:59.726466 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 14:03:59.726476 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 14:03:59.726486 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 14:03:59.726496 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 14:03:59.726506 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 14:03:59.726515 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 14:03:59.726523 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 14:03:59.726531 | orchestrator | 2025-07-12 14:03:59.726539 | orchestrator | 2025-07-12 14:03:59.726547 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:03:59.726564 | orchestrator | Saturday 12 July 2025 14:03:57 +0000 (0:00:10.506) 0:03:29.973 ********* 2025-07-12 14:03:59.726572 | orchestrator | =============================================================================== 2025-07-12 14:03:59.726580 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 34.04s 2025-07-12 14:03:59.726588 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.79s 2025-07-12 14:03:59.726596 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.32s 2025-07-12 14:03:59.726604 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.83s 2025-07-12 14:03:59.726612 | 
orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.47s 2025-07-12 14:03:59.726620 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.40s 2025-07-12 14:03:59.726633 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.51s 2025-07-12 14:03:59.726646 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.43s 2025-07-12 14:03:59.726654 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.15s 2025-07-12 14:03:59.726662 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.58s 2025-07-12 14:03:59.726670 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.38s 2025-07-12 14:03:59.726678 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.85s 2025-07-12 14:03:59.726686 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.34s 2025-07-12 14:03:59.726694 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.73s 2025-07-12 14:03:59.726702 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.38s 2025-07-12 14:03:59.726710 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.70s 2025-07-12 14:03:59.726718 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.68s 2025-07-12 14:03:59.726726 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.65s 2025-07-12 14:03:59.726734 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.34s 2025-07-12 14:03:59.726742 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.25s 2025-07-12 14:03:59.726750 | 
orchestrator | 2025-07-12 14:03:59 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:03:59.726758 | orchestrator | 2025-07-12 14:03:59 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:02.762406 | orchestrator | 2025-07-12 14:04:02 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:02.765933 | orchestrator | 2025-07-12 14:04:02 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:02.767292 | orchestrator | 2025-07-12 14:04:02 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:02.769202 | orchestrator | 2025-07-12 14:04:02 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:02.769230 | orchestrator | 2025-07-12 14:04:02 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:05.813334 | orchestrator | 2025-07-12 14:04:05 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:05.815259 | orchestrator | 2025-07-12 14:04:05 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:05.817084 | orchestrator | 2025-07-12 14:04:05 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:05.819086 | orchestrator | 2025-07-12 14:04:05 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:05.819118 | orchestrator | 2025-07-12 14:04:05 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:08.867034 | orchestrator | 2025-07-12 14:04:08 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:08.868263 | orchestrator | 2025-07-12 14:04:08 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:08.869936 | orchestrator | 2025-07-12 14:04:08 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:08.871370 | orchestrator | 2025-07-12 14:04:08 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:08.871405 | orchestrator | 2025-07-12 14:04:08 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:11.919845 | orchestrator | 2025-07-12 14:04:11 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:11.924558 | orchestrator | 2025-07-12 14:04:11 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:11.924599 | orchestrator | 2025-07-12 14:04:11 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:11.926567 | orchestrator | 2025-07-12 14:04:11 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:11.926595 | orchestrator | 2025-07-12 14:04:11 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:14.983760 | orchestrator | 2025-07-12 14:04:14 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:14.985725 | orchestrator | 2025-07-12 14:04:14 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:14.987972 | orchestrator | 2025-07-12 14:04:14 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:14.989721 | orchestrator | 2025-07-12 14:04:14 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:14.989900 | orchestrator | 2025-07-12 14:04:14 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:18.048348 | orchestrator | 2025-07-12 14:04:18 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:18.048552 | orchestrator | 2025-07-12 14:04:18 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:18.052443 | orchestrator | 2025-07-12 14:04:18 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:18.052915 | orchestrator | 2025-07-12 14:04:18 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:18.053051 | orchestrator | 2025-07-12 14:04:18 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:21.087777 | orchestrator | 2025-07-12 14:04:21 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:21.088222 | orchestrator | 2025-07-12 14:04:21 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:21.089011 | orchestrator | 2025-07-12 14:04:21 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:21.093563 | orchestrator | 2025-07-12 14:04:21 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:21.094260 | orchestrator | 2025-07-12 14:04:21 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:24.137669 | orchestrator | 2025-07-12 14:04:24 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:24.138362 | orchestrator | 2025-07-12 14:04:24 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:24.140043 | orchestrator | 2025-07-12 14:04:24 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:24.141220 | orchestrator | 2025-07-12 14:04:24 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:24.141250 | orchestrator | 2025-07-12 14:04:24 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:27.172832 | orchestrator | 2025-07-12 14:04:27 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:27.173628 | orchestrator | 2025-07-12 14:04:27 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:27.174947 | orchestrator | 2025-07-12 14:04:27 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:27.175407 | orchestrator | 2025-07-12 14:04:27 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:27.175507 | orchestrator | 2025-07-12 14:04:27 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:30.228472 | orchestrator | 2025-07-12 14:04:30 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:30.230190 | orchestrator | 2025-07-12 14:04:30 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:30.231405 | orchestrator | 2025-07-12 14:04:30 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:30.233786 | orchestrator | 2025-07-12 14:04:30 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:30.234569 | orchestrator | 2025-07-12 14:04:30 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:33.289020 | orchestrator | 2025-07-12 14:04:33 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:33.292760 | orchestrator | 2025-07-12 14:04:33 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:33.292807 | orchestrator | 2025-07-12 14:04:33 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:33.292821 | orchestrator | 2025-07-12 14:04:33 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:33.292832 | orchestrator | 2025-07-12 14:04:33 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:36.336910 | orchestrator | 2025-07-12 14:04:36 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:36.340118 | orchestrator | 2025-07-12 14:04:36 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:36.343746 | orchestrator | 2025-07-12 14:04:36 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:36.347492 | orchestrator | 2025-07-12 14:04:36 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:36.347515 | orchestrator | 2025-07-12 14:04:36 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:39.395257 | orchestrator | 2025-07-12 14:04:39 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:39.397593 | orchestrator | 2025-07-12 14:04:39 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:39.399542 | orchestrator | 2025-07-12 14:04:39 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:39.402095 | orchestrator | 2025-07-12 14:04:39 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:39.402121 | orchestrator | 2025-07-12 14:04:39 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:42.450992 | orchestrator | 2025-07-12 14:04:42 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:42.452605 | orchestrator | 2025-07-12 14:04:42 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:42.454138 | orchestrator | 2025-07-12 14:04:42 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state STARTED
2025-07-12 14:04:42.455429 | orchestrator | 2025-07-12 14:04:42 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:42.455531 | orchestrator | 2025-07-12 14:04:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:45.497760 | orchestrator | 2025-07-12 14:04:45 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:45.503077 | orchestrator | 2025-07-12 14:04:45 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:45.507061 | orchestrator | 2025-07-12 14:04:45 | INFO  | Task 42a80554-3c06-44fc-a9d5-c5cb45a14aa4 is in state SUCCESS
2025-07-12 14:04:45.508525 | orchestrator |
2025-07-12 14:04:45.508557 | orchestrator |
2025-07-12
14:04:45.508567 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:04:45.508576 | orchestrator |
2025-07-12 14:04:45.508584 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:04:45.508593 | orchestrator | Saturday 12 July 2025 14:01:50 +0000 (0:00:00.297) 0:00:00.297 *********
2025-07-12 14:04:45.508601 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:04:45.508610 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:04:45.508619 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:04:45.508627 | orchestrator |
2025-07-12 14:04:45.508635 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:04:45.508643 | orchestrator | Saturday 12 July 2025 14:01:51 +0000 (0:00:00.309) 0:00:00.606 *********
2025-07-12 14:04:45.508652 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-07-12 14:04:45.508661 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-07-12 14:04:45.508669 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-07-12 14:04:45.508677 | orchestrator |
2025-07-12 14:04:45.508685 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-07-12 14:04:45.508693 | orchestrator |
2025-07-12 14:04:45.508700 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-12 14:04:45.508709 | orchestrator | Saturday 12 July 2025 14:01:51 +0000 (0:00:00.414) 0:00:01.021 *********
2025-07-12 14:04:45.508716 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 14:04:45.508725 | orchestrator |
2025-07-12 14:04:45.508733 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-07-12 14:04:45.508741 | orchestrator | Saturday 12 July 2025 14:01:52 +0000 (0:00:00.993) 0:00:02.015 *********
2025-07-12 14:04:45.508749 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-07-12 14:04:45.508757 | orchestrator |
2025-07-12 14:04:45.508765 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-07-12 14:04:45.508773 | orchestrator | Saturday 12 July 2025 14:01:56 +0000 (0:00:03.660) 0:00:05.676 *********
2025-07-12 14:04:45.508781 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-07-12 14:04:45.508789 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-07-12 14:04:45.508797 | orchestrator |
2025-07-12 14:04:45.508805 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-07-12 14:04:45.508813 | orchestrator | Saturday 12 July 2025 14:02:03 +0000 (0:00:07.176) 0:00:12.852 *********
2025-07-12 14:04:45.508822 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 14:04:45.508831 | orchestrator |
2025-07-12 14:04:45.508838 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-07-12 14:04:45.508847 | orchestrator | Saturday 12 July 2025 14:02:06 +0000 (0:00:03.277) 0:00:16.130 *********
2025-07-12 14:04:45.508876 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 14:04:45.508903 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-07-12 14:04:45.508912 | orchestrator |
2025-07-12 14:04:45.508920 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-07-12 14:04:45.508928 | orchestrator | Saturday 12 July 2025 14:02:10 +0000 (0:00:04.079) 0:00:20.209 *********
2025-07-12 14:04:45.508937 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 14:04:45.508945 | orchestrator |
2025-07-12 14:04:45.508953
| orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-07-12 14:04:45.508961 | orchestrator | Saturday 12 July 2025 14:02:14 +0000 (0:00:03.809) 0:00:24.019 ********* 2025-07-12 14:04:45.508969 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-07-12 14:04:45.508977 | orchestrator | 2025-07-12 14:04:45.508998 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-07-12 14:04:45.509006 | orchestrator | Saturday 12 July 2025 14:02:18 +0000 (0:00:03.840) 0:00:27.859 ********* 2025-07-12 14:04:45.509032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.509046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.509066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.509077 | orchestrator | 2025-07-12 14:04:45.509085 | orchestrator | TASK [glance : include_tasks] ************************************************** 
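The haproxy sections repeated in the items above carry a `custom_member_list` with one identical `server …` line per control node. As an aside, generating those entries is a simple mapping over the inventory; the helper below is a hypothetical illustration (not kolla-ansible code), using the node names, IPs, and check parameters exactly as they appear in the glance_api config in this log:

```python
def haproxy_members(nodes, port=9292):
    """Build haproxy 'custom_member_list' entries like those in the
    glance_api config above. `nodes` is a list of (name, ip) pairs taken
    from the inventory; check parameters match the log output."""
    return [
        f"server {name} {ip}:{port} check inter 2000 rise 2 fall 5"
        for name, ip in nodes
    ]


testbed_nodes = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]
```

Calling `haproxy_members(testbed_nodes)` reproduces the three backend lines seen in every `glance_api` and `glance_api_external` item.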
2025-07-12 14:04:45.509093 | orchestrator | Saturday 12 July 2025 14:02:21 +0000 (0:00:03.581) 0:00:31.440 ********* 2025-07-12 14:04:45.509101 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:04:45.509109 | orchestrator | 2025-07-12 14:04:45.509122 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-07-12 14:04:45.509132 | orchestrator | Saturday 12 July 2025 14:02:22 +0000 (0:00:00.689) 0:00:32.130 ********* 2025-07-12 14:04:45.509141 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:04:45.509150 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:04:45.509159 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:04:45.509169 | orchestrator | 2025-07-12 14:04:45.509178 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-07-12 14:04:45.509187 | orchestrator | Saturday 12 July 2025 14:02:26 +0000 (0:00:03.812) 0:00:35.942 ********* 2025-07-12 14:04:45.509196 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 14:04:45.509205 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 14:04:45.509215 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 14:04:45.509224 | orchestrator | 2025-07-12 14:04:45.509233 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-07-12 14:04:45.509242 | orchestrator | Saturday 12 July 2025 14:02:27 +0000 (0:00:01.519) 0:00:37.462 ********* 2025-07-12 14:04:45.509251 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 14:04:45.509267 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 14:04:45.509276 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 14:04:45.509284 | orchestrator | 2025-07-12 14:04:45.509333 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-07-12 14:04:45.509344 | orchestrator | Saturday 12 July 2025 14:02:29 +0000 (0:00:01.179) 0:00:38.641 ********* 2025-07-12 14:04:45.509353 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:04:45.509362 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:04:45.509371 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:04:45.509380 | orchestrator | 2025-07-12 14:04:45.509388 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-07-12 14:04:45.509395 | orchestrator | Saturday 12 July 2025 14:02:29 +0000 (0:00:00.854) 0:00:39.495 ********* 2025-07-12 14:04:45.509403 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.509411 | orchestrator | 2025-07-12 14:04:45.509419 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-07-12 14:04:45.509427 | orchestrator | Saturday 12 July 2025 14:02:30 +0000 (0:00:00.135) 0:00:39.631 ********* 2025-07-12 14:04:45.509435 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.509443 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:04:45.509451 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.509459 | orchestrator | 2025-07-12 14:04:45.509467 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-12 14:04:45.509474 | orchestrator | Saturday 12 July 2025 14:02:30 +0000 (0:00:00.302) 0:00:39.934 ********* 2025-07-12 14:04:45.509482 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
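The directory tasks above report `changed` on first run and `ok` afterwards because they only act when the directory, its mode, or its owner differ from the desired state. As a side note, that Ansible-style idempotency can be sketched as follows; this is an illustrative helper under stated assumptions, not the kolla-ansible implementation:

```python
import os
import stat


def ensure_dir(path, mode=0o770, uid=None, gid=None):
    """Idempotently ensure a config directory exists with the given mode
    (and, optionally, owner), mirroring the 'Ensuring config directories
    exist' / 'correct owner and permission' tasks in the log. Returns True
    if anything changed (changed) and False otherwise (ok)."""
    changed = False
    if not os.path.isdir(path):
        os.makedirs(path, mode=mode)  # actual mode is still subject to umask
        changed = True
    st = os.stat(path)
    if stat.S_IMODE(st.st_mode) != mode:
        os.chmod(path, mode)
        changed = True
    if uid is not None and gid is not None and (st.st_uid, st.st_gid) != (uid, gid):
        os.chown(path, uid, gid)  # requires sufficient privileges
        changed = True
    return changed
```

A second call on an unchanged directory returns False, which is why the re-run tasks above show `ok` instead of `changed`.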
2025-07-12 14:04:45.509490 | orchestrator | 2025-07-12 14:04:45.509498 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-07-12 14:04:45.509506 | orchestrator | Saturday 12 July 2025 14:02:30 +0000 (0:00:00.577) 0:00:40.511 ********* 2025-07-12 14:04:45.509524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.509535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.509554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.509563 | orchestrator | 2025-07-12 14:04:45.509571 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-07-12 14:04:45.509579 | orchestrator | Saturday 12 July 2025 14:02:34 +0000 (0:00:03.930) 0:00:44.441 ********* 2025-07-12 14:04:45.509595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:04:45.509609 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.509622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:04:45.509631 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:04:45.509645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:04:45.509667 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.509675 | orchestrator | 2025-07-12 14:04:45.509683 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-07-12 14:04:45.509691 | orchestrator | Saturday 12 July 2025 14:02:41 +0000 (0:00:06.965) 0:00:51.406 ********* 2025-07-12 14:04:45.509700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:04:45.509708 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:04:45.509727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:04:45.509744 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.509753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 14:04:45.509762 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.509770 | orchestrator | 2025-07-12 14:04:45.509778 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-07-12 14:04:45.509786 | orchestrator | Saturday 12 July 2025 14:02:45 +0000 (0:00:03.721) 0:00:55.128 ********* 2025-07-12 14:04:45.509794 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.509802 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:04:45.509810 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.509817 | orchestrator | 2025-07-12 14:04:45.509825 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-07-12 14:04:45.509833 | orchestrator | Saturday 12 July 2025 14:02:48 +0000 (0:00:03.025) 0:00:58.153 ********* 2025-07-12 14:04:45.509850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.509864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.509877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.509891 | orchestrator | 2025-07-12 14:04:45.509899 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-07-12 14:04:45.509907 | orchestrator | Saturday 12 July 2025 14:02:53 +0000 (0:00:04.377) 0:01:02.531 ********* 2025-07-12 14:04:45.509915 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:04:45.509923 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:04:45.509930 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:04:45.509938 | orchestrator | 2025-07-12 14:04:45.509946 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-07-12 14:04:45.509954 | orchestrator | Saturday 12 July 2025 14:02:59 +0000 (0:00:06.025) 0:01:08.556 ********* 2025-07-12 14:04:45.509962 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.510084 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 14:04:45.510098 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:04:45.510106 | orchestrator | 2025-07-12 14:04:45.510114 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-07-12 14:04:45.510128 | orchestrator | Saturday 12 July 2025 14:03:05 +0000 (0:00:06.502) 0:01:15.059 ********* 2025-07-12 14:04:45.510137 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:04:45.510145 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.510152 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.510160 | orchestrator | 2025-07-12 14:04:45.510168 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-07-12 14:04:45.510176 | orchestrator | Saturday 12 July 2025 14:03:09 +0000 (0:00:03.523) 0:01:18.582 ********* 2025-07-12 14:04:45.510184 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.510192 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:04:45.510200 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.510208 | orchestrator | 2025-07-12 14:04:45.510216 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-07-12 14:04:45.510224 | orchestrator | Saturday 12 July 2025 14:03:13 +0000 (0:00:04.373) 0:01:22.956 ********* 2025-07-12 14:04:45.510232 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.510239 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:04:45.510247 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.510255 | orchestrator | 2025-07-12 14:04:45.510263 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-07-12 14:04:45.510270 | orchestrator | Saturday 12 July 2025 14:03:20 +0000 (0:00:07.309) 0:01:30.265 ********* 2025-07-12 14:04:45.510278 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.510286 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 14:04:45.510314 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.510328 | orchestrator | 2025-07-12 14:04:45.510341 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-07-12 14:04:45.510353 | orchestrator | Saturday 12 July 2025 14:03:21 +0000 (0:00:00.296) 0:01:30.561 ********* 2025-07-12 14:04:45.510365 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-12 14:04:45.510379 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.510392 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-12 14:04:45.510405 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:04:45.510488 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-12 14:04:45.510500 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.510508 | orchestrator | 2025-07-12 14:04:45.510516 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-07-12 14:04:45.510524 | orchestrator | Saturday 12 July 2025 14:03:25 +0000 (0:00:04.944) 0:01:35.505 ********* 2025-07-12 14:04:45.510540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.510566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.510580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 14:04:45.510595 | orchestrator | 2025-07-12 14:04:45.510604 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-12 14:04:45.510611 | orchestrator | Saturday 12 July 2025 14:03:29 +0000 (0:00:03.762) 0:01:39.268 ********* 2025-07-12 14:04:45.510619 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:04:45.510627 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:04:45.510635 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:04:45.510643 | orchestrator | 2025-07-12 14:04:45.510650 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-07-12 14:04:45.510658 | orchestrator | Saturday 12 July 2025 14:03:30 +0000 (0:00:00.275) 0:01:39.544 ********* 2025-07-12 14:04:45.510666 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:04:45.510674 | orchestrator | 2025-07-12 14:04:45.510682 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-07-12 14:04:45.510689 | orchestrator | Saturday 12 July 2025 14:03:32 +0000 (0:00:02.066) 0:01:41.610 ********* 2025-07-12 14:04:45.510697 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:04:45.510705 | orchestrator | 2025-07-12 14:04:45.510713 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-07-12 14:04:45.510721 | orchestrator | Saturday 12 July 2025 14:03:34 +0000 (0:00:02.201) 
0:01:43.812 *********
2025-07-12 14:04:45.510729 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:04:45.510736 | orchestrator |
2025-07-12 14:04:45.510745 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-07-12 14:04:45.510752 | orchestrator | Saturday 12 July 2025 14:03:36 +0000 (0:00:02.116) 0:01:45.928 *********
2025-07-12 14:04:45.510760 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:04:45.510768 | orchestrator |
2025-07-12 14:04:45.510776 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-07-12 14:04:45.510784 | orchestrator | Saturday 12 July 2025 14:04:07 +0000 (0:00:31.565) 0:02:17.494 *********
2025-07-12 14:04:45.510792 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:04:45.510800 | orchestrator |
2025-07-12 14:04:45.510812 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 14:04:45.510820 | orchestrator | Saturday 12 July 2025 14:04:10 +0000 (0:00:02.526) 0:02:20.021 *********
2025-07-12 14:04:45.510828 | orchestrator |
2025-07-12 14:04:45.510836 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 14:04:45.510844 | orchestrator | Saturday 12 July 2025 14:04:10 +0000 (0:00:00.064) 0:02:20.086 *********
2025-07-12 14:04:45.510852 | orchestrator |
2025-07-12 14:04:45.510859 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 14:04:45.510867 | orchestrator | Saturday 12 July 2025 14:04:10 +0000 (0:00:00.063) 0:02:20.149 *********
2025-07-12 14:04:45.510875 | orchestrator |
2025-07-12 14:04:45.510883 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-07-12 14:04:45.510891 | orchestrator | Saturday 12 July 2025 14:04:10 +0000 (0:00:00.065) 0:02:20.215 *********
2025-07-12 14:04:45.510899 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:04:45.510907 | orchestrator | changed: [testbed-node-1]
2025-07-12 14:04:45.510915 | orchestrator | changed: [testbed-node-2]
2025-07-12 14:04:45.510928 | orchestrator |
2025-07-12 14:04:45.510936 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:04:45.510945 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-12 14:04:45.510954 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 14:04:45.510962 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 14:04:45.510970 | orchestrator |
2025-07-12 14:04:45.510978 | orchestrator |
2025-07-12 14:04:45.510986 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:04:45.510994 | orchestrator | Saturday 12 July 2025 14:04:44 +0000 (0:00:34.091) 0:02:54.306 *********
2025-07-12 14:04:45.511002 | orchestrator | ===============================================================================
2025-07-12 14:04:45.511010 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.09s
2025-07-12 14:04:45.511018 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 31.57s
2025-07-12 14:04:45.511026 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 7.31s
2025-07-12 14:04:45.511034 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.18s
2025-07-12 14:04:45.511042 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.97s
2025-07-12 14:04:45.511049 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.50s
2025-07-12 14:04:45.511057 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.03s
2025-07-12 14:04:45.511065 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.94s
2025-07-12 14:04:45.511073 | orchestrator | glance : Copying over config.json files for services -------------------- 4.38s
2025-07-12 14:04:45.511081 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.37s
2025-07-12 14:04:45.511089 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.08s
2025-07-12 14:04:45.511096 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.93s
2025-07-12 14:04:45.511104 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.84s
2025-07-12 14:04:45.511115 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.81s
2025-07-12 14:04:45.511124 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.81s
2025-07-12 14:04:45.511133 | orchestrator | glance : Check glance containers ---------------------------------------- 3.76s
2025-07-12 14:04:45.511142 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.72s
2025-07-12 14:04:45.511151 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.66s
2025-07-12 14:04:45.511160 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.58s
2025-07-12 14:04:45.511168 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.52s
2025-07-12 14:04:45.511178 | orchestrator | 2025-07-12 14:04:45 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:45.511187 | orchestrator | 2025-07-12 14:04:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:48.557424 | orchestrator | 2025-07-12 14:04:48 |
INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:04:48.559652 | orchestrator | 2025-07-12 14:04:48 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:48.561835 | orchestrator | 2025-07-12 14:04:48 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:48.563479 | orchestrator | 2025-07-12 14:04:48 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:48.563536 | orchestrator | 2025-07-12 14:04:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:51.601401 | orchestrator | 2025-07-12 14:04:51 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:04:51.604804 | orchestrator | 2025-07-12 14:04:51 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:51.606760 | orchestrator | 2025-07-12 14:04:51 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:51.608765 | orchestrator | 2025-07-12 14:04:51 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:51.608788 | orchestrator | 2025-07-12 14:04:51 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:54.651183 | orchestrator | 2025-07-12 14:04:54 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:04:54.652835 | orchestrator | 2025-07-12 14:04:54 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:54.654501 | orchestrator | 2025-07-12 14:04:54 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:54.655771 | orchestrator | 2025-07-12 14:04:54 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:54.655799 | orchestrator | 2025-07-12 14:04:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:04:57.698363 | orchestrator | 2025-07-12 14:04:57 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:04:57.700177 | orchestrator | 2025-07-12 14:04:57 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:04:57.701927 | orchestrator | 2025-07-12 14:04:57 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:04:57.704457 | orchestrator | 2025-07-12 14:04:57 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:04:57.704509 | orchestrator | 2025-07-12 14:04:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:00.755982 | orchestrator | 2025-07-12 14:05:00 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:05:00.757470 | orchestrator | 2025-07-12 14:05:00 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:05:00.758675 | orchestrator | 2025-07-12 14:05:00 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:05:00.760580 | orchestrator | 2025-07-12 14:05:00 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:05:00.760893 | orchestrator | 2025-07-12 14:05:00 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:03.796525 | orchestrator | 2025-07-12 14:05:03 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:05:03.797340 | orchestrator | 2025-07-12 14:05:03 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:05:03.798846 | orchestrator | 2025-07-12 14:05:03 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:05:03.799671 | orchestrator | 2025-07-12 14:05:03 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:05:03.799698 | orchestrator | 2025-07-12 14:05:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:06.838384 | orchestrator | 2025-07-12 14:05:06 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:05:06.840087 | orchestrator | 2025-07-12 14:05:06 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:05:06.842216 | orchestrator | 2025-07-12 14:05:06 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:05:06.843763 | orchestrator | 2025-07-12 14:05:06 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:05:06.843789 | orchestrator | 2025-07-12 14:05:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:09.895611 | orchestrator | 2025-07-12 14:05:09 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:05:09.900973 | orchestrator | 2025-07-12 14:05:09 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:05:09.901009 | orchestrator | 2025-07-12 14:05:09 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:05:09.901980 | orchestrator | 2025-07-12 14:05:09 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:05:09.902364 | orchestrator | 2025-07-12 14:05:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:12.946260 | orchestrator | 2025-07-12 14:05:12 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:05:12.947651 | orchestrator | 2025-07-12 14:05:12 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:05:12.949130 | orchestrator | 2025-07-12 14:05:12 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:05:12.951009 | orchestrator | 2025-07-12 14:05:12 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:05:12.951143 | orchestrator | 2025-07-12 14:05:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:15.992953 | orchestrator | 2025-07-12 14:05:15 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:05:15.994371 | orchestrator | 2025-07-12 14:05:15 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:05:15.996166 | orchestrator | 2025-07-12 14:05:15 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state STARTED
2025-07-12 14:05:15.998255 | orchestrator | 2025-07-12 14:05:15 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED
2025-07-12 14:05:15.998355 | orchestrator | 2025-07-12 14:05:15 | INFO  | Wait 1 second(s) until the next check
2025-07-12 14:05:19.058373 | orchestrator | 2025-07-12 14:05:19 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED
2025-07-12 14:05:19.060680 | orchestrator | 2025-07-12 14:05:19 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED
2025-07-12 14:05:19.062730 | orchestrator | 2025-07-12 14:05:19 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED
2025-07-12 14:05:19.068107 | orchestrator |
2025-07-12 14:05:19.068754 | orchestrator |
2025-07-12 14:05:19.068782 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:05:19.068794 | orchestrator |
2025-07-12 14:05:19.068806 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:05:19.068817 | orchestrator | Saturday 12 July 2025 14:02:04 +0000 (0:00:00.203) 0:00:00.203 *********
2025-07-12 14:05:19.068829 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:05:19.068842 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:05:19.068853 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:05:19.068997 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:05:19.069012 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:05:19.069024 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:05:19.069035 | orchestrator |
2025-07-12 14:05:19.069047 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:05:19.069059 | orchestrator | Saturday 12 July 2025 14:02:05 +0000 (0:00:00.525) 0:00:00.728 *********
2025-07-12 14:05:19.069071 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-07-12 14:05:19.069110 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-07-12 14:05:19.069122 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-07-12 14:05:19.069133 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-07-12 14:05:19.069145 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-07-12 14:05:19.069156 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-07-12 14:05:19.069168 | orchestrator |
2025-07-12 14:05:19.069179 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-07-12 14:05:19.069191 | orchestrator |
2025-07-12 14:05:19.069203 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 14:05:19.069215 | orchestrator | Saturday 12 July 2025 14:02:05 +0000 (0:00:00.510) 0:00:01.239 *********
2025-07-12 14:05:19.069240 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 14:05:19.069254 | orchestrator |
2025-07-12 14:05:19.069265 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-07-12 14:05:19.069607 | orchestrator | Saturday 12 July 2025 14:02:06 +0000 (0:00:01.051) 0:00:02.290 *********
2025-07-12 14:05:19.069625 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-07-12 14:05:19.069638 | orchestrator |
2025-07-12 14:05:19.069650 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-07-12 14:05:19.069664 | orchestrator | Saturday 12 July 2025
14:02:10 +0000 (0:00:03.344) 0:00:05.635 ********* 2025-07-12 14:05:19.069677 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-07-12 14:05:19.069691 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-07-12 14:05:19.069702 | orchestrator | 2025-07-12 14:05:19.069713 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-07-12 14:05:19.069724 | orchestrator | Saturday 12 July 2025 14:02:16 +0000 (0:00:06.565) 0:00:12.200 ********* 2025-07-12 14:05:19.069735 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 14:05:19.069746 | orchestrator | 2025-07-12 14:05:19.069756 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-07-12 14:05:19.069767 | orchestrator | Saturday 12 July 2025 14:02:20 +0000 (0:00:03.325) 0:00:15.525 ********* 2025-07-12 14:05:19.069778 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:05:19.069789 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-07-12 14:05:19.069800 | orchestrator | 2025-07-12 14:05:19.069811 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-07-12 14:05:19.069822 | orchestrator | Saturday 12 July 2025 14:02:23 +0000 (0:00:03.761) 0:00:19.287 ********* 2025-07-12 14:05:19.069832 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:05:19.069843 | orchestrator | 2025-07-12 14:05:19.069854 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-07-12 14:05:19.069865 | orchestrator | Saturday 12 July 2025 14:02:27 +0000 (0:00:03.354) 0:00:22.642 ********* 2025-07-12 14:05:19.069876 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-07-12 
14:05:19.069887 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-07-12 14:05:19.069897 | orchestrator | 2025-07-12 14:05:19.069908 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-07-12 14:05:19.069919 | orchestrator | Saturday 12 July 2025 14:02:34 +0000 (0:00:07.772) 0:00:30.414 ********* 2025-07-12 14:05:19.069933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:19.070108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.070140 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:19.070155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:19.070167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.070180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.070232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.070246 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.070264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.070276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.070336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.070365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.070377 | orchestrator | 2025-07-12 14:05:19.070418 | orchestrator | TASK [cinder : 
include_tasks] ************************************************** 2025-07-12 14:05:19.070432 | orchestrator | Saturday 12 July 2025 14:02:39 +0000 (0:00:04.740) 0:00:35.155 ********* 2025-07-12 14:05:19.070443 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:19.070454 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:19.070465 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:19.070476 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:05:19.070487 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:05:19.070497 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:05:19.070508 | orchestrator | 2025-07-12 14:05:19.070519 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 14:05:19.070531 | orchestrator | Saturday 12 July 2025 14:02:41 +0000 (0:00:01.321) 0:00:36.476 ********* 2025-07-12 14:05:19.070541 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:19.070552 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:19.070563 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:19.070574 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 14:05:19.070585 | orchestrator | 2025-07-12 14:05:19.070596 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-07-12 14:05:19.070607 | orchestrator | Saturday 12 July 2025 14:02:42 +0000 (0:00:01.020) 0:00:37.497 ********* 2025-07-12 14:05:19.070618 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-07-12 14:05:19.070629 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-07-12 14:05:19.070640 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-07-12 14:05:19.070651 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-07-12 14:05:19.070662 | orchestrator | changed: [testbed-node-5] => 
(item=cinder-backup) 2025-07-12 14:05:19.070672 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-07-12 14:05:19.070683 | orchestrator | 2025-07-12 14:05:19.070700 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-07-12 14:05:19.070711 | orchestrator | Saturday 12 July 2025 14:02:44 +0000 (0:00:02.663) 0:00:40.160 ********* 2025-07-12 14:05:19.070723 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:19.070744 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:19.070756 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:19.070796 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:19.070814 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:19.070826 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 14:05:19.070839 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 14:05:19.070858 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 14:05:19.070895 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 14:05:19.070914 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 14:05:19.070927 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 14:05:19.070947 | orchestrator | changed: [testbed-node-3] => 
(item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 14:05:19.070959 | orchestrator | 2025-07-12 14:05:19.070970 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-07-12 14:05:19.070981 | orchestrator | Saturday 12 July 2025 14:02:47 +0000 (0:00:03.069) 0:00:43.229 ********* 2025-07-12 14:05:19.070992 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 14:05:19.071004 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 14:05:19.071015 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 14:05:19.071026 | orchestrator | 2025-07-12 14:05:19.071037 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-07-12 14:05:19.071048 | orchestrator | Saturday 12 July 2025 14:02:49 +0000 (0:00:01.647) 0:00:44.876 ********* 2025-07-12 14:05:19.071059 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-07-12 14:05:19.071070 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-07-12 14:05:19.071081 | orchestrator | changed: [testbed-node-4] => 
(item=ceph.client.cinder.keyring) 2025-07-12 14:05:19.071092 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 14:05:19.071103 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 14:05:19.071138 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 14:05:19.071150 | orchestrator | 2025-07-12 14:05:19.071161 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-07-12 14:05:19.071172 | orchestrator | Saturday 12 July 2025 14:02:52 +0000 (0:00:03.038) 0:00:47.914 ********* 2025-07-12 14:05:19.071183 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-07-12 14:05:19.071194 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-07-12 14:05:19.071205 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-07-12 14:05:19.071216 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-07-12 14:05:19.071227 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-07-12 14:05:19.071237 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-07-12 14:05:19.071248 | orchestrator | 2025-07-12 14:05:19.071259 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-07-12 14:05:19.071270 | orchestrator | Saturday 12 July 2025 14:02:53 +0000 (0:00:00.938) 0:00:48.853 ********* 2025-07-12 14:05:19.071281 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:19.071434 | orchestrator | 2025-07-12 14:05:19.071448 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-07-12 14:05:19.071459 | orchestrator | Saturday 12 July 2025 14:02:53 +0000 (0:00:00.122) 0:00:48.975 ********* 2025-07-12 14:05:19.071470 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:19.071481 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
14:05:19.071492 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:19.071511 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:05:19.071522 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:05:19.071533 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:05:19.071544 | orchestrator |
2025-07-12 14:05:19.071555 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 14:05:19.071566 | orchestrator | Saturday 12 July 2025 14:02:54 +0000 (0:00:00.868) 0:00:49.843 *********
2025-07-12 14:05:19.071584 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 14:05:19.071596 | orchestrator |
2025-07-12 14:05:19.071606 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-07-12 14:05:19.071615 | orchestrator | Saturday 12 July 2025 14:02:55 +0000 (0:00:01.504) 0:00:51.348 *********
2025-07-12 14:05:19.071626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.071637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.071679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.071692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.071713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.071724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.071735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.071745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.071784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.071797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.071817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.071828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.071907 | orchestrator |
2025-07-12 14:05:19.071918 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-07-12 14:05:19.071928 | orchestrator | Saturday 12 July 2025 14:02:59 +0000 (0:00:03.342) 0:00:54.691 *********
2025-07-12 14:05:19.071939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.071974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.071987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.072010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.072031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072041 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:19.072051 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:19.072061 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:19.072072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072120 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:05:19.072145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072179 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:05:19.072196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072230 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:05:19.072246 | orchestrator |
2025-07-12 14:05:19.072264 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-07-12 14:05:19.072308 | orchestrator | Saturday 12 July 2025 14:03:02 +0000 (0:00:03.291) 0:00:57.982 *********
2025-07-12 14:05:19.072334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.072345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072360 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:05:19.072371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.072381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072391 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:05:19.072401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.072424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072435 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:05:19.072445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072471 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:05:19.072481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072507 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:05:19.072524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072545 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:05:19.072554 | orchestrator |
2025-07-12 14:05:19.072564 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-07-12 14:05:19.072574 | orchestrator | Saturday 12 July 2025 14:03:04 +0000 (0:00:02.344) 0:01:00.327 *********
2025-07-12 14:05:19.072589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.072600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.072610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 14:05:19.072634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072650 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 14:05:19.072681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout':
'30'}}}) 2025-07-12 14:05:19.072697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.072713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.072724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.072739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.072749 | orchestrator | 2025-07-12 14:05:19.072759 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-07-12 14:05:19.072769 | orchestrator | Saturday 12 July 2025 14:03:07 +0000 (0:00:02.924) 0:01:03.251 ********* 2025-07-12 14:05:19.072779 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 14:05:19.072789 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:05:19.072801 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 14:05:19.072817 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:05:19.072839 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 14:05:19.072860 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:05:19.072886 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 14:05:19.072902 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 14:05:19.072917 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 14:05:19.072933 | orchestrator | 2025-07-12 14:05:19.072948 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-07-12 14:05:19.072964 | orchestrator | Saturday 12 July 2025 14:03:10 +0000 (0:00:02.301) 0:01:05.553 ********* 2025-07-12 14:05:19.072979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.073006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 2025-07-12 14:05:19 | INFO  | Task 7c158e8c-e962-46a7-b902-4df9c482fb4c is in state SUCCESS 2025-07-12 14:05:19.073025 | orchestrator | 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:19.073050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.073068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:19.073096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:19.073113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2025-07-12 14:05:19.073124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.073143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.073154 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.073172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.073183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.073193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.073203 | orchestrator | 2025-07-12 14:05:19.073219 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-07-12 14:05:19.073343 | orchestrator | Saturday 12 July 2025 14:03:20 +0000 (0:00:10.863) 0:01:16.416 ********* 2025-07-12 14:05:19.073360 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:19.073376 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:19.073392 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:19.073409 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:05:19.073426 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:05:19.073442 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:05:19.073458 | orchestrator | 2025-07-12 14:05:19.073475 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-07-12 14:05:19.073491 | orchestrator | Saturday 12 July 2025 14:03:24 +0000 (0:00:03.139) 0:01:19.556 ********* 2025-07-12 14:05:19.073517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
 2025-07-12 14:05:19.073529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:05:19.073556 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:19.073583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 14:05:19.073600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:05:19.073617 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:19.073644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 14:05:19.073662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:05:19.073680 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:19.073704 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 14:05:19.073731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 14:05:19.073742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 14:05:19.073752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 14:05:19.073762 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:05:19.073778 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:05:19.073789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 14:05:19.073804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 14:05:19.073822 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:05:19.073832 | orchestrator | 2025-07-12 14:05:19.073842 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-07-12 14:05:19.073852 | orchestrator | Saturday 12 July 2025 14:03:25 +0000 (0:00:01.619) 0:01:21.175 ********* 2025-07-12 14:05:19.073862 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:19.073872 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:05:19.073882 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:19.073891 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:05:19.073901 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:05:19.073911 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:05:19.073921 | orchestrator | 2025-07-12 14:05:19.073931 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-07-12 14:05:19.073941 | 
orchestrator | Saturday 12 July 2025 14:03:26 +0000 (0:00:00.703) 0:01:21.879 ********* 2025-07-12 14:05:19.073951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.073961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:19.073979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:19.074000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 14:05:19.074011 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.074110 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.074121 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.074141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.074152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.074175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.074186 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.074196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 14:05:19.074206 | orchestrator | 2025-07-12 14:05:19.074216 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 14:05:19.074227 | orchestrator | Saturday 12 July 2025 14:03:28 +0000 (0:00:02.576) 0:01:24.456 ********* 2025-07-12 14:05:19.074244 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:19.074260 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 14:05:19.074355 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:05:19.074374 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:05:19.074390 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:05:19.074405 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:05:19.074419 | orchestrator | 2025-07-12 14:05:19.074435 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-07-12 14:05:19.074450 | orchestrator | Saturday 12 July 2025 14:03:29 +0000 (0:00:00.649) 0:01:25.106 ********* 2025-07-12 14:05:19.074465 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:05:19.074480 | orchestrator | 2025-07-12 14:05:19.074496 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-07-12 14:05:19.074511 | orchestrator | Saturday 12 July 2025 14:03:31 +0000 (0:00:02.159) 0:01:27.266 ********* 2025-07-12 14:05:19.074526 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:05:19.074541 | orchestrator | 2025-07-12 14:05:19.074557 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-07-12 14:05:19.074574 | orchestrator | Saturday 12 July 2025 14:03:33 +0000 (0:00:02.143) 0:01:29.409 ********* 2025-07-12 14:05:19.074603 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:05:19.074621 | orchestrator | 2025-07-12 14:05:19.074648 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:19.074662 | orchestrator | Saturday 12 July 2025 14:03:56 +0000 (0:00:22.999) 0:01:52.409 ********* 2025-07-12 14:05:19.074670 | orchestrator | 2025-07-12 14:05:19.074678 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:19.074686 | orchestrator | Saturday 12 July 2025 14:03:56 +0000 (0:00:00.064) 0:01:52.473 ********* 2025-07-12 14:05:19.074694 | orchestrator | 2025-07-12 
14:05:19.074702 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:19.074710 | orchestrator | Saturday 12 July 2025 14:03:57 +0000 (0:00:00.070) 0:01:52.544 ********* 2025-07-12 14:05:19.074719 | orchestrator | 2025-07-12 14:05:19.074727 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:19.074735 | orchestrator | Saturday 12 July 2025 14:03:57 +0000 (0:00:00.062) 0:01:52.607 ********* 2025-07-12 14:05:19.074743 | orchestrator | 2025-07-12 14:05:19.074751 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:19.074759 | orchestrator | Saturday 12 July 2025 14:03:57 +0000 (0:00:00.065) 0:01:52.673 ********* 2025-07-12 14:05:19.074767 | orchestrator | 2025-07-12 14:05:19.074775 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 14:05:19.074783 | orchestrator | Saturday 12 July 2025 14:03:57 +0000 (0:00:00.060) 0:01:52.733 ********* 2025-07-12 14:05:19.074791 | orchestrator | 2025-07-12 14:05:19.074799 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-07-12 14:05:19.074807 | orchestrator | Saturday 12 July 2025 14:03:57 +0000 (0:00:00.069) 0:01:52.802 ********* 2025-07-12 14:05:19.074815 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:05:19.074823 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:05:19.074831 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:05:19.074839 | orchestrator | 2025-07-12 14:05:19.074847 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-07-12 14:05:19.074860 | orchestrator | Saturday 12 July 2025 14:04:24 +0000 (0:00:27.140) 0:02:19.943 ********* 2025-07-12 14:05:19.074868 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:05:19.074876 | orchestrator | changed: 
[testbed-node-1] 2025-07-12 14:05:19.074884 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:05:19.074892 | orchestrator | 2025-07-12 14:05:19.074900 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-07-12 14:05:19.074909 | orchestrator | Saturday 12 July 2025 14:04:30 +0000 (0:00:06.316) 0:02:26.260 ********* 2025-07-12 14:05:19.074917 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:05:19.074925 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:05:19.074933 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:05:19.074941 | orchestrator | 2025-07-12 14:05:19.074949 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-07-12 14:05:19.074957 | orchestrator | Saturday 12 July 2025 14:05:09 +0000 (0:00:38.431) 0:03:04.691 ********* 2025-07-12 14:05:19.074965 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:05:19.074973 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:05:19.074981 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:05:19.074989 | orchestrator | 2025-07-12 14:05:19.074998 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-07-12 14:05:19.075006 | orchestrator | Saturday 12 July 2025 14:05:15 +0000 (0:00:05.922) 0:03:10.613 ********* 2025-07-12 14:05:19.075014 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:05:19.075022 | orchestrator | 2025-07-12 14:05:19.075030 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:05:19.075038 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 14:05:19.075048 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-12 14:05:19.075062 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2025-07-12 14:05:19.075070 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 14:05:19.075078 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 14:05:19.075086 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 14:05:19.075094 | orchestrator | 2025-07-12 14:05:19.075102 | orchestrator | 2025-07-12 14:05:19.075110 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:05:19.075118 | orchestrator | Saturday 12 July 2025 14:05:15 +0000 (0:00:00.616) 0:03:11.230 ********* 2025-07-12 14:05:19.075126 | orchestrator | =============================================================================== 2025-07-12 14:05:19.075133 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 38.43s 2025-07-12 14:05:19.075141 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.14s 2025-07-12 14:05:19.075149 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 23.00s 2025-07-12 14:05:19.075157 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.86s 2025-07-12 14:05:19.075165 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.77s 2025-07-12 14:05:19.075173 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.57s 2025-07-12 14:05:19.075181 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.32s 2025-07-12 14:05:19.075194 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.92s 2025-07-12 14:05:19.075202 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.74s 2025-07-12 
14:05:19.075210 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.76s 2025-07-12 14:05:19.075218 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.35s 2025-07-12 14:05:19.075226 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.34s 2025-07-12 14:05:19.075234 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.34s 2025-07-12 14:05:19.075242 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.33s 2025-07-12 14:05:19.075250 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS certificate --- 3.29s 2025-07-12 14:05:19.075258 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.14s 2025-07-12 14:05:19.075266 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.07s 2025-07-12 14:05:19.075273 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.04s 2025-07-12 14:05:19.075281 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.92s 2025-07-12 14:05:19.075310 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.66s 2025-07-12 14:05:19.075319 | orchestrator | 2025-07-12 14:05:19 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:05:19.075327 | orchestrator | 2025-07-12 14:05:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:22.123556 | orchestrator | 2025-07-12 14:05:22 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED 2025-07-12 14:05:22.125725 | orchestrator | 2025-07-12 14:05:22 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:05:22.127121 | orchestrator | 2025-07-12 14:05:22 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in 
state STARTED 2025-07-12 14:05:22.129270 | orchestrator | 2025-07-12 14:05:22 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:05:22.129333 | orchestrator | 2025-07-12 14:05:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:40.440044 | orchestrator | 2025-07-12 14:05:40 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state STARTED 2025-07-12 14:05:40.440578 | orchestrator | 2025-07-12 14:05:40 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:05:40.441207 | orchestrator | 2025-07-12 14:05:40 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:05:40.442119 | orchestrator
| 2025-07-12 14:05:40 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:05:40.442148 | orchestrator | 2025-07-12 14:05:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:43.493851 | orchestrator | 2025-07-12 14:05:43.493940 | orchestrator | 2025-07-12 14:05:43.493954 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:05:43.493966 | orchestrator | 2025-07-12 14:05:43.493994 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:05:43.494006 | orchestrator | Saturday 12 July 2025 14:04:49 +0000 (0:00:00.258) 0:00:00.258 ********* 2025-07-12 14:05:43.494071 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:05:43.494085 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:05:43.494096 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:05:43.494108 | orchestrator | 2025-07-12 14:05:43.494119 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:05:43.494131 | orchestrator | Saturday 12 July 2025 14:04:49 +0000 (0:00:00.289) 0:00:00.548 ********* 2025-07-12 14:05:43.494142 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-07-12 14:05:43.494153 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-07-12 14:05:43.494164 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-07-12 14:05:43.494174 | orchestrator | 2025-07-12 14:05:43.494185 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-07-12 14:05:43.494196 | orchestrator | 2025-07-12 14:05:43.494207 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 14:05:43.494218 | orchestrator | Saturday 12 July 2025 14:04:49 +0000 (0:00:00.440) 0:00:00.988 ********* 2025-07-12 14:05:43.494229 | orchestrator | included: 
/ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:05:43.494241 | orchestrator | 2025-07-12 14:05:43.494252 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-07-12 14:05:43.494263 | orchestrator | Saturday 12 July 2025 14:04:50 +0000 (0:00:00.534) 0:00:01.522 ********* 2025-07-12 14:05:43.494274 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-07-12 14:05:43.494350 | orchestrator | 2025-07-12 14:05:43.494362 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-07-12 14:05:43.494372 | orchestrator | Saturday 12 July 2025 14:04:53 +0000 (0:00:03.563) 0:00:05.086 ********* 2025-07-12 14:05:43.494383 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-07-12 14:05:43.494394 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-07-12 14:05:43.494405 | orchestrator | 2025-07-12 14:05:43.494415 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-07-12 14:05:43.494426 | orchestrator | Saturday 12 July 2025 14:05:00 +0000 (0:00:06.190) 0:00:11.276 ********* 2025-07-12 14:05:43.494437 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 14:05:43.494448 | orchestrator | 2025-07-12 14:05:43.494459 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-07-12 14:05:43.494470 | orchestrator | Saturday 12 July 2025 14:05:03 +0000 (0:00:03.526) 0:00:14.802 ********* 2025-07-12 14:05:43.494480 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:05:43.494491 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-12 14:05:43.494502 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 
2025-07-12 14:05:43.494514 | orchestrator | 2025-07-12 14:05:43.494524 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-07-12 14:05:43.494557 | orchestrator | Saturday 12 July 2025 14:05:11 +0000 (0:00:07.770) 0:00:22.573 ********* 2025-07-12 14:05:43.494568 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:05:43.494579 | orchestrator | 2025-07-12 14:05:43.494589 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-07-12 14:05:43.494600 | orchestrator | Saturday 12 July 2025 14:05:14 +0000 (0:00:03.549) 0:00:26.123 ********* 2025-07-12 14:05:43.494611 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-12 14:05:43.494622 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-12 14:05:43.494632 | orchestrator | 2025-07-12 14:05:43.494643 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-07-12 14:05:43.494654 | orchestrator | Saturday 12 July 2025 14:05:22 +0000 (0:00:07.759) 0:00:33.882 ********* 2025-07-12 14:05:43.494664 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-07-12 14:05:43.494675 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-07-12 14:05:43.494686 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-07-12 14:05:43.494696 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-07-12 14:05:43.494707 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-07-12 14:05:43.494718 | orchestrator | 2025-07-12 14:05:43.494728 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 14:05:43.494739 | orchestrator | Saturday 12 July 2025 14:05:38 +0000 (0:00:15.821) 0:00:49.704 ********* 2025-07-12 14:05:43.494750 | orchestrator | 
included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:05:43.494761 | orchestrator | 2025-07-12 14:05:43.494772 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-07-12 14:05:43.494782 | orchestrator | Saturday 12 July 2025 14:05:39 +0000 (0:00:00.590) 0:00:50.295 ********* 2025-07-12 14:05:43.494794 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-07-12 14:05:43.494843 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1752329140.6918693-6672-212925779910024/AnsiballZ_compute_flavor.py\", line 107, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1752329140.6918693-6672-212925779910024/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1752329140.6918693-6672-212925779910024/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"<frozen runpy>\", line 226, in run_module\n File \"<frozen runpy>\", line 98, in _run_module_code\n File \"<frozen runpy>\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_ld59_an2/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in <module>\n File \"/tmp/ansible_os_nova_flavor_payload_ld59_an2/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File 
\"/tmp/ansible_os_nova_flavor_payload_ld59_an2/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_ld59_an2/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr 
for the exact error", "rc": 1} 2025-07-12 14:05:43.494868 | orchestrator | 2025-07-12 14:05:43.494880 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:05:43.494891 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-07-12 14:05:43.494904 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:05:43.494915 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:05:43.494926 | orchestrator | 2025-07-12 14:05:43.494937 | orchestrator | 2025-07-12 14:05:43.494948 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:05:43.494959 | orchestrator | Saturday 12 July 2025 14:05:42 +0000 (0:00:03.698) 0:00:53.993 ********* 2025-07-12 14:05:43.494977 | orchestrator | =============================================================================== 2025-07-12 14:05:43.494988 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.82s 2025-07-12 14:05:43.495004 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.77s 2025-07-12 14:05:43.495015 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.76s 2025-07-12 14:05:43.495026 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.19s 2025-07-12 14:05:43.495037 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.70s 2025-07-12 14:05:43.495048 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.56s 2025-07-12 14:05:43.495059 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.55s 2025-07-12 14:05:43.495070 | orchestrator | service-ks-register : octavia | Creating 
projects ----------------------- 3.53s 2025-07-12 14:05:43.495080 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.59s 2025-07-12 14:05:43.495091 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.53s 2025-07-12 14:05:43.495102 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-07-12 14:05:43.495113 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-07-12 14:05:43.495130 | orchestrator | 2025-07-12 14:05:43 | INFO  | Task ec4c7347-a7a9-4417-ae81-fa7ed1915d84 is in state SUCCESS 2025-07-12 14:05:43.495715 | orchestrator | 2025-07-12 14:05:43 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:05:43.497320 | orchestrator | 2025-07-12 14:05:43 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:05:43.498709 | orchestrator | 2025-07-12 14:05:43 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:05:43.498733 | orchestrator | 2025-07-12 14:05:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:46.542768 | orchestrator | 2025-07-12 14:05:46 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:05:46.544374 | orchestrator | 2025-07-12 14:05:46 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:05:46.545666 | orchestrator | 2025-07-12 14:05:46 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:05:46.545697 | orchestrator | 2025-07-12 14:05:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:05:49.584539 | orchestrator | 2025-07-12 14:05:49 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:05:49.586341 | orchestrator | 2025-07-12 14:05:49 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 
14:05:49.588908 | orchestrator | 2025-07-12 14:05:49 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:05:49.588998 | orchestrator | 2025-07-12 14:05:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:13.985420 | orchestrator | 2025-07-12 14:06:13 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in 
state STARTED 2025-07-12 14:06:13.985996 | orchestrator | 2025-07-12 14:06:13 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:13.987145 | orchestrator | 2025-07-12 14:06:13 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:13.987190 | orchestrator | 2025-07-12 14:06:13 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:17.028938 | orchestrator | 2025-07-12 14:06:17 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:17.030471 | orchestrator | 2025-07-12 14:06:17 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:17.031871 | orchestrator | 2025-07-12 14:06:17 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:17.031967 | orchestrator | 2025-07-12 14:06:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:20.077854 | orchestrator | 2025-07-12 14:06:20 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:20.079302 | orchestrator | 2025-07-12 14:06:20 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:20.081528 | orchestrator | 2025-07-12 14:06:20 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:20.081552 | orchestrator | 2025-07-12 14:06:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:23.121112 | orchestrator | 2025-07-12 14:06:23 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:23.121390 | orchestrator | 2025-07-12 14:06:23 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:23.122122 | orchestrator | 2025-07-12 14:06:23 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:23.122146 | orchestrator | 2025-07-12 14:06:23 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:26.166819 | orchestrator 
| 2025-07-12 14:06:26 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:26.167520 | orchestrator | 2025-07-12 14:06:26 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:26.168846 | orchestrator | 2025-07-12 14:06:26 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:26.168871 | orchestrator | 2025-07-12 14:06:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:29.217701 | orchestrator | 2025-07-12 14:06:29 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:29.220133 | orchestrator | 2025-07-12 14:06:29 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:29.222712 | orchestrator | 2025-07-12 14:06:29 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:29.222855 | orchestrator | 2025-07-12 14:06:29 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:32.275540 | orchestrator | 2025-07-12 14:06:32 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:32.275643 | orchestrator | 2025-07-12 14:06:32 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:32.276056 | orchestrator | 2025-07-12 14:06:32 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:32.276429 | orchestrator | 2025-07-12 14:06:32 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:35.322999 | orchestrator | 2025-07-12 14:06:35 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:35.324418 | orchestrator | 2025-07-12 14:06:35 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:35.328657 | orchestrator | 2025-07-12 14:06:35 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:35.329251 | orchestrator | 2025-07-12 14:06:35 | INFO  | 
Wait 1 second(s) until the next check 2025-07-12 14:06:38.386001 | orchestrator | 2025-07-12 14:06:38 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:38.386735 | orchestrator | 2025-07-12 14:06:38 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:38.387562 | orchestrator | 2025-07-12 14:06:38 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:38.387587 | orchestrator | 2025-07-12 14:06:38 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:41.440429 | orchestrator | 2025-07-12 14:06:41 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:41.442138 | orchestrator | 2025-07-12 14:06:41 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:41.444789 | orchestrator | 2025-07-12 14:06:41 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:41.444813 | orchestrator | 2025-07-12 14:06:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:44.493550 | orchestrator | 2025-07-12 14:06:44 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:44.495341 | orchestrator | 2025-07-12 14:06:44 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:44.497973 | orchestrator | 2025-07-12 14:06:44 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:44.498130 | orchestrator | 2025-07-12 14:06:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:47.545676 | orchestrator | 2025-07-12 14:06:47 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:47.546669 | orchestrator | 2025-07-12 14:06:47 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:47.548464 | orchestrator | 2025-07-12 14:06:47 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state 
STARTED 2025-07-12 14:06:47.548490 | orchestrator | 2025-07-12 14:06:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:50.584834 | orchestrator | 2025-07-12 14:06:50 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:50.586245 | orchestrator | 2025-07-12 14:06:50 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:50.587085 | orchestrator | 2025-07-12 14:06:50 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:50.587110 | orchestrator | 2025-07-12 14:06:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:53.632115 | orchestrator | 2025-07-12 14:06:53 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:53.633440 | orchestrator | 2025-07-12 14:06:53 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:53.635488 | orchestrator | 2025-07-12 14:06:53 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:53.635514 | orchestrator | 2025-07-12 14:06:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:56.684197 | orchestrator | 2025-07-12 14:06:56 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:56.685233 | orchestrator | 2025-07-12 14:06:56 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:56.686722 | orchestrator | 2025-07-12 14:06:56 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:56.686770 | orchestrator | 2025-07-12 14:06:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:06:59.729647 | orchestrator | 2025-07-12 14:06:59 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:06:59.734493 | orchestrator | 2025-07-12 14:06:59 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:06:59.734536 | orchestrator | 
2025-07-12 14:06:59 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:06:59.734550 | orchestrator | 2025-07-12 14:06:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:02.778168 | orchestrator | 2025-07-12 14:07:02 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:02.779644 | orchestrator | 2025-07-12 14:07:02 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:07:02.782290 | orchestrator | 2025-07-12 14:07:02 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:02.782311 | orchestrator | 2025-07-12 14:07:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:05.824609 | orchestrator | 2025-07-12 14:07:05 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:05.827176 | orchestrator | 2025-07-12 14:07:05 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:07:05.829172 | orchestrator | 2025-07-12 14:07:05 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:05.829202 | orchestrator | 2025-07-12 14:07:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:08.884396 | orchestrator | 2025-07-12 14:07:08 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:08.886344 | orchestrator | 2025-07-12 14:07:08 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:07:08.889201 | orchestrator | 2025-07-12 14:07:08 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:08.889244 | orchestrator | 2025-07-12 14:07:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:11.940166 | orchestrator | 2025-07-12 14:07:11 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:11.942169 | orchestrator | 2025-07-12 14:07:11 | INFO  | Task 
9ed59247-8a51-459b-9eb3-e8936f2b419c is in state STARTED 2025-07-12 14:07:11.945518 | orchestrator | 2025-07-12 14:07:11 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:11.945679 | orchestrator | 2025-07-12 14:07:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:14.998746 | orchestrator | 2025-07-12 14:07:14 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:14.999737 | orchestrator | 2025-07-12 14:07:14 | INFO  | Task 9ed59247-8a51-459b-9eb3-e8936f2b419c is in state SUCCESS 2025-07-12 14:07:15.001748 | orchestrator | 2025-07-12 14:07:14 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:15.001774 | orchestrator | 2025-07-12 14:07:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:18.051508 | orchestrator | 2025-07-12 14:07:18 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:18.052377 | orchestrator | 2025-07-12 14:07:18 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:18.052415 | orchestrator | 2025-07-12 14:07:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:21.098931 | orchestrator | 2025-07-12 14:07:21 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:21.099703 | orchestrator | 2025-07-12 14:07:21 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:21.099738 | orchestrator | 2025-07-12 14:07:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:24.132093 | orchestrator | 2025-07-12 14:07:24 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:24.133266 | orchestrator | 2025-07-12 14:07:24 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:24.133304 | orchestrator | 2025-07-12 14:07:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 
14:07:27.175074 | orchestrator | 2025-07-12 14:07:27 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:27.175172 | orchestrator | 2025-07-12 14:07:27 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:27.175206 | orchestrator | 2025-07-12 14:07:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:30.219729 | orchestrator | 2025-07-12 14:07:30 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:30.227924 | orchestrator | 2025-07-12 14:07:30 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:30.227982 | orchestrator | 2025-07-12 14:07:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:33.263771 | orchestrator | 2025-07-12 14:07:33 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state STARTED 2025-07-12 14:07:33.265780 | orchestrator | 2025-07-12 14:07:33 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:33.265817 | orchestrator | 2025-07-12 14:07:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:36.307185 | orchestrator | 2025-07-12 14:07:36 | INFO  | Task ba77aa61-8b77-4a97-a81a-96a273056584 is in state SUCCESS 2025-07-12 14:07:36.309365 | orchestrator | 2025-07-12 14:07:36.309907 | orchestrator | 2025-07-12 14:07:36.309939 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:07:36.309959 | orchestrator | 2025-07-12 14:07:36.309979 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:07:36.310424 | orchestrator | Saturday 12 July 2025 14:04:02 +0000 (0:00:00.171) 0:00:00.171 ********* 2025-07-12 14:07:36.310452 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:07:36.310465 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:07:36.310476 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:07:36.310487 | 
orchestrator |
2025-07-12 14:07:36.310499 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:07:36.310510 | orchestrator | Saturday 12 July 2025 14:04:02 +0000 (0:00:00.289) 0:00:00.460 *********
2025-07-12 14:07:36.310521 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-07-12 14:07:36.310532 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-07-12 14:07:36.310544 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-07-12 14:07:36.310554 | orchestrator |
2025-07-12 14:07:36.310565 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-07-12 14:07:36.310576 | orchestrator |
2025-07-12 14:07:36.310587 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-07-12 14:07:36.310598 | orchestrator | Saturday 12 July 2025 14:04:03 +0000 (0:00:00.605) 0:00:01.065 *********
2025-07-12 14:07:36.310609 | orchestrator |
2025-07-12 14:07:36.310620 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2025-07-12 14:07:36.310631 | orchestrator |
2025-07-12 14:07:36.310642 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2025-07-12 14:07:36.310653 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:07:36.310664 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:07:36.310675 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:07:36.310686 | orchestrator |
2025-07-12 14:07:36.310697 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:07:36.310709 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:07:36.310722 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:07:36.310733 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:07:36.310744 | orchestrator |
2025-07-12 14:07:36.310755 | orchestrator |
2025-07-12 14:07:36.310766 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:07:36.310777 | orchestrator | Saturday 12 July 2025 14:07:14 +0000 (0:03:10.783) 0:03:11.849 *********
2025-07-12 14:07:36.310788 | orchestrator | ===============================================================================
2025-07-12 14:07:36.310799 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 190.78s
2025-07-12 14:07:36.310810 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2025-07-12 14:07:36.310821 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-07-12 14:07:36.310832 | orchestrator |
2025-07-12 14:07:36.310843 | orchestrator |
2025-07-12 14:07:36.310854 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 14:07:36.310864 | orchestrator |
2025-07-12 14:07:36.310875 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 14:07:36.310887 | orchestrator | Saturday 12 July 2025 14:05:20 +0000 (0:00:00.267) 0:00:00.267 *********
2025-07-12 14:07:36.310898 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:07:36.310909 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:07:36.310920 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:07:36.310930 | orchestrator |
2025-07-12 14:07:36.310969 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 14:07:36.310981 | orchestrator | Saturday 12 July 2025 14:05:20 +0000 (0:00:00.307) 0:00:00.574 *********
2025-07-12 14:07:36.310992 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-07-12
14:07:36.311003 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-07-12 14:07:36.311014 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-07-12 14:07:36.311025 | orchestrator | 2025-07-12 14:07:36.311038 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-07-12 14:07:36.311051 | orchestrator | 2025-07-12 14:07:36.311064 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-12 14:07:36.311077 | orchestrator | Saturday 12 July 2025 14:05:20 +0000 (0:00:00.400) 0:00:00.975 ********* 2025-07-12 14:07:36.311091 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:07:36.311103 | orchestrator | 2025-07-12 14:07:36.311130 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-07-12 14:07:36.311143 | orchestrator | Saturday 12 July 2025 14:05:21 +0000 (0:00:00.509) 0:00:01.485 ********* 2025-07-12 14:07:36.311159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.311226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.311284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.311298 | orchestrator | 2025-07-12 14:07:36.311311 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-07-12 14:07:36.311324 | orchestrator | Saturday 12 July 2025 14:05:22 +0000 (0:00:00.828) 0:00:02.313 ********* 2025-07-12 14:07:36.311336 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-07-12 14:07:36.311349 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-07-12 14:07:36.311361 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 14:07:36.311374 | orchestrator | 2025-07-12 14:07:36.311386 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-12 14:07:36.311398 | orchestrator | Saturday 12 July 2025 
14:05:22 +0000 (0:00:00.832) 0:00:03.146 ********* 2025-07-12 14:07:36.311418 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:07:36.311429 | orchestrator | 2025-07-12 14:07:36.311440 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-07-12 14:07:36.311451 | orchestrator | Saturday 12 July 2025 14:05:23 +0000 (0:00:00.703) 0:00:03.850 ********* 2025-07-12 14:07:36.311462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.311480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.311529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.311543 | orchestrator | 2025-07-12 14:07:36.311553 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-07-12 14:07:36.311564 | orchestrator | Saturday 12 July 2025 14:05:25 +0000 (0:00:01.374) 0:00:05.224 ********* 2025-07-12 14:07:36.311575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 14:07:36.311587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 14:07:36.311614 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:07:36.311625 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:07:36.311636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 14:07:36.311648 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:07:36.311659 | orchestrator | 2025-07-12 14:07:36.311670 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-07-12 14:07:36.311681 | orchestrator | Saturday 12 July 2025 14:05:25 +0000 (0:00:00.358) 0:00:05.583 ********* 2025-07-12 14:07:36.311692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.311709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.311721 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:07:36.311732 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:07:36.311776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.311789 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:07:36.311800 | orchestrator |
2025-07-12 14:07:36.311812 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-07-12 14:07:36.311823 | orchestrator | Saturday 12 July 2025 14:05:26 +0000 (0:00:00.801) 0:00:06.384 *********
2025-07-12 14:07:36.311834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.311853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.311865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.311876 | orchestrator |
2025-07-12 14:07:36.311887 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-07-12 14:07:36.311898 | orchestrator | Saturday 12 July 2025 14:05:27 +0000 (0:00:01.371) 0:00:07.755 *********
2025-07-12 14:07:36.311914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.311956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.311970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 14:07:36.311989 | orchestrator |
2025-07-12 14:07:36.312000 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-07-12 14:07:36.312011 | orchestrator | Saturday 12 July 2025 14:05:28 +0000 (0:00:01.319) 0:00:09.075 *********
2025-07-12 14:07:36.312022 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:07:36.312033 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:07:36.312044 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:07:36.312055 | orchestrator |
2025-07-12 14:07:36.312065 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-07-12 14:07:36.312076 | orchestrator | Saturday 12 July 2025 14:05:29 +0000 (0:00:00.741) 0:00:09.816 *********
2025-07-12 14:07:36.312087 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 14:07:36.312098 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 14:07:36.312109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 14:07:36.312120 | orchestrator |
2025-07-12 14:07:36.312130 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-07-12 14:07:36.312141 | orchestrator | Saturday 12 July 2025 14:05:31 +0000 (0:00:01.386) 0:00:11.202 *********
2025-07-12 14:07:36.312152 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 14:07:36.312163 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 14:07:36.312174 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 14:07:36.312185 | orchestrator |
2025-07-12 14:07:36.312196 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-07-12 14:07:36.312206 | orchestrator | Saturday 12 July 2025 14:05:32 +0000 (0:00:00.877) 0:00:12.593 *********
2025-07-12 14:07:36.312217 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 14:07:36.312228 | orchestrator |
2025-07-12 14:07:36.312299 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-07-12 14:07:36.312314 | orchestrator | Saturday 12 July 2025 14:05:33 +0000 (0:00:00.684) 0:00:13.470 *********
2025-07-12 14:07:36.312325 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-07-12 14:07:36.312336 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-07-12 14:07:36.312347 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:07:36.312357 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:07:36.312368 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:07:36.312379 | orchestrator |
2025-07-12 14:07:36.312390 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-07-12 14:07:36.312401 | orchestrator | Saturday 12 July 2025 14:05:33 +0000 (0:00:00.584) 0:00:14.154 *********
2025-07-12 14:07:36.312412 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:07:36.312422 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:07:36.312433 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:07:36.312444 | orchestrator |
2025-07-12 14:07:36.312455 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-07-12 14:07:36.312465 | orchestrator | Saturday 12 July 2025 14:05:34 +0000 (0:00:00.584) 0:00:14.739 *********
2025-07-12 14:07:36.312483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094379, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.282828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094379, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.282828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094379, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.282828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094344, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2758281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094344, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2758281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094344, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2758281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094330, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.273828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094330, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.273828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094330, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.273828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094365, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2788281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094365, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2788281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094365, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2788281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094301, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.269828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094301, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.269828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094301, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.269828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094334, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.273828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094334, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.273828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094334, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.273828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094358, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2788281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094358, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2788281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094358, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2788281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094300, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.267828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094300, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.267828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094300, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.267828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094261, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.260828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094261, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.260828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.312989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094261, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.260828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094311, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.270828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094311, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.270828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094311, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.270828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094270, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.263828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094270, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.263828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094270, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.263828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094354, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.276828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094354, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.276828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094354, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.276828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094321, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2718282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094321, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2718282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094321, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2718282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094370, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.279828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094370, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.279828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094370, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.279828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094292, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.267828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094292, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.267828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094292, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.267828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094340, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2748282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 14:07:36.313305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094340, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2748282, 'gr_name': 'root', 'pw_name':
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094340, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2748282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094263, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2628279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094263, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 
'ctime': 1752326059.2628279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094263, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2628279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094279, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.264828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094279, 'dev': 93, 'nlink': 1, 'atime': 
1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.264828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094279, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.264828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094327, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.272828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 
1094327, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.272828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094327, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.272828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094448, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4388301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094448, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4388301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094448, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4388301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094433, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2948284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094433, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2948284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094433, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2948284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094402, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.282828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094402, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.282828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094402, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.282828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094801, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4468303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-07-12 14:07:36.313572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094801, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4468303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094801, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4468303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094403, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2838283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094403, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2838283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094403, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2838283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094799, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4448302, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094799, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4448302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094799, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4448302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 
'inode': 1094802, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4488301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094802, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4488301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094802, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4488301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094796, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.44283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094796, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.44283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094796, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.44283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094798, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4438303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094798, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4438303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094798, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4438303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313802 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094406, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2838283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094406, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2838283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094406, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2838283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 
14:07:36.313839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094441, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2958283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094441, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2958283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094441, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2958283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094803, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4498303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094803, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4498303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094803, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4498303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094800, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4448302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094800, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4448302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094800, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4448302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094412, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2868283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094412, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2868283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094412, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 
1752278522.0, 'ctime': 1752326059.2868283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.313994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094410, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2848282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094410, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2848282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094410, 
'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2848282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094421, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2878282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094421, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2878282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 82960, 'inode': 1094421, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2878282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094424, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2928283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094424, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2928283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094424, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2928283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094443, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2958283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094443, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2958283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094443, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2958283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094797, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.44283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094797, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.44283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314230 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094797, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.44283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094446, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2958283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094446, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2958283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 
14:07:36.314350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094446, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.2958283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094805, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4518301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094805, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4518301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094805, 'dev': 93, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752326059.4518301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 14:07:36.314396 | orchestrator | 2025-07-12 14:07:36.314406 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-07-12 14:07:36.314417 | orchestrator | Saturday 12 July 2025 14:06:11 +0000 (0:00:36.988) 0:00:51.728 ********* 2025-07-12 14:07:36.314431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.314442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.314461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 14:07:36.314471 | orchestrator | 2025-07-12 14:07:36.314481 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-07-12 14:07:36.314491 | orchestrator | Saturday 12 July 2025 14:06:12 +0000 (0:00:00.993) 0:00:52.721 ********* 2025-07-12 14:07:36.314501 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:07:36.314511 | orchestrator | 2025-07-12 14:07:36.314521 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-07-12 14:07:36.314531 | orchestrator | Saturday 12 July 2025 14:06:14 +0000 (0:00:02.083) 0:00:54.805 ********* 2025-07-12 14:07:36.314540 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:07:36.314548 | orchestrator | 2025-07-12 14:07:36.314556 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-12 
14:07:36.314564 | orchestrator | Saturday 12 July 2025 14:06:16 +0000 (0:00:02.040) 0:00:56.845 ********* 2025-07-12 14:07:36.314572 | orchestrator | 2025-07-12 14:07:36.314580 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-12 14:07:36.314588 | orchestrator | Saturday 12 July 2025 14:06:16 +0000 (0:00:00.247) 0:00:57.093 ********* 2025-07-12 14:07:36.314596 | orchestrator | 2025-07-12 14:07:36.314604 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-12 14:07:36.314612 | orchestrator | Saturday 12 July 2025 14:06:16 +0000 (0:00:00.062) 0:00:57.156 ********* 2025-07-12 14:07:36.314620 | orchestrator | 2025-07-12 14:07:36.314627 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-07-12 14:07:36.314635 | orchestrator | Saturday 12 July 2025 14:06:17 +0000 (0:00:00.075) 0:00:57.231 ********* 2025-07-12 14:07:36.314643 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:07:36.314651 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:07:36.314659 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:07:36.314667 | orchestrator | 2025-07-12 14:07:36.314675 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-07-12 14:07:36.314683 | orchestrator | Saturday 12 July 2025 14:06:18 +0000 (0:00:01.732) 0:00:58.964 ********* 2025-07-12 14:07:36.314691 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:07:36.314699 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:07:36.314706 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-07-12 14:07:36.314715 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2025-07-12 14:07:36.314723 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-07-12 14:07:36.314731 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:07:36.314739 | orchestrator | 2025-07-12 14:07:36.314752 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-07-12 14:07:36.314764 | orchestrator | Saturday 12 July 2025 14:06:56 +0000 (0:00:38.084) 0:01:37.048 ********* 2025-07-12 14:07:36.314772 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:07:36.314780 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:07:36.314788 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:07:36.314796 | orchestrator | 2025-07-12 14:07:36.314804 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-07-12 14:07:36.314812 | orchestrator | Saturday 12 July 2025 14:07:29 +0000 (0:00:32.226) 0:02:09.275 ********* 2025-07-12 14:07:36.314820 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:07:36.314828 | orchestrator | 2025-07-12 14:07:36.314836 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-07-12 14:07:36.314844 | orchestrator | Saturday 12 July 2025 14:07:31 +0000 (0:00:02.553) 0:02:11.829 ********* 2025-07-12 14:07:36.314852 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:07:36.314860 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:07:36.314868 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:07:36.314876 | orchestrator | 2025-07-12 14:07:36.314884 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-07-12 14:07:36.314892 | orchestrator | Saturday 12 July 2025 14:07:31 +0000 (0:00:00.306) 0:02:12.136 ********* 2025-07-12 14:07:36.314904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-07-12 14:07:36.314914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-07-12 14:07:36.314924 | orchestrator | 2025-07-12 14:07:36.314933 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-07-12 14:07:36.314940 | orchestrator | Saturday 12 July 2025 14:07:34 +0000 (0:00:02.526) 0:02:14.662 ********* 2025-07-12 14:07:36.314948 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:07:36.314956 | orchestrator | 2025-07-12 14:07:36.314964 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:07:36.314973 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 14:07:36.314981 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 14:07:36.314989 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 14:07:36.314997 | orchestrator | 2025-07-12 14:07:36.315005 | orchestrator | 2025-07-12 14:07:36.315013 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:07:36.315021 | orchestrator | Saturday 12 July 2025 14:07:34 +0000 (0:00:00.302) 0:02:14.965 ********* 2025-07-12 14:07:36.315029 | orchestrator | =============================================================================== 2025-07-12 14:07:36.315037 | orchestrator | grafana : 
Waiting for grafana to start on first node ------------------- 38.08s 2025-07-12 14:07:36.315045 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.99s 2025-07-12 14:07:36.315053 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.23s 2025-07-12 14:07:36.315061 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.55s 2025-07-12 14:07:36.315069 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.53s 2025-07-12 14:07:36.315081 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.08s 2025-07-12 14:07:36.315090 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.04s 2025-07-12 14:07:36.315098 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.73s 2025-07-12 14:07:36.315106 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.39s 2025-07-12 14:07:36.315113 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.39s 2025-07-12 14:07:36.315121 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.37s 2025-07-12 14:07:36.315129 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.37s 2025-07-12 14:07:36.315137 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.32s 2025-07-12 14:07:36.315145 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.99s 2025-07-12 14:07:36.315153 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.88s 2025-07-12 14:07:36.315161 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s 2025-07-12 14:07:36.315169 | orchestrator | grafana : Ensuring config 
directories exist ----------------------------- 0.83s 2025-07-12 14:07:36.315177 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.80s 2025-07-12 14:07:36.315185 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.74s 2025-07-12 14:07:36.315193 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.70s 2025-07-12 14:07:36.315205 | orchestrator | 2025-07-12 14:07:36 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:36.315213 | orchestrator | 2025-07-12 14:07:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:39.355494 | orchestrator | 2025-07-12 14:07:39 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:39.355600 | orchestrator | 2025-07-12 14:07:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:42.399910 | orchestrator | 2025-07-12 14:07:42 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:42.400001 | orchestrator | 2025-07-12 14:07:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:45.459097 | orchestrator | 2025-07-12 14:07:45 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:45.459203 | orchestrator | 2025-07-12 14:07:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:48.507322 | orchestrator | 2025-07-12 14:07:48 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:48.507391 | orchestrator | 2025-07-12 14:07:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:51.552501 | orchestrator | 2025-07-12 14:07:51 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:51.552603 | orchestrator | 2025-07-12 14:07:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:54.595671 | orchestrator | 2025-07-12 14:07:54 | INFO  | Task 
0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:54.595779 | orchestrator | 2025-07-12 14:07:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:07:57.638838 | orchestrator | 2025-07-12 14:07:57 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:07:57.638957 | orchestrator | 2025-07-12 14:07:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:08:00.683395 | orchestrator | 2025-07-12 14:08:00 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:08:00.683499 | orchestrator | 2025-07-12 14:08:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:08:03.723157 | orchestrator | 2025-07-12 14:08:03 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:08:03.723296 | orchestrator | 2025-07-12 14:08:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:08:06.747499 | orchestrator | 2025-07-12 14:08:06 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:08:06.747597 | orchestrator | 2025-07-12 14:08:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:08:09.793118 | orchestrator | 2025-07-12 14:08:09 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:08:09.793220 | orchestrator | 2025-07-12 14:08:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:08:12.824287 | orchestrator | 2025-07-12 14:08:12 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:08:12.824392 | orchestrator | 2025-07-12 14:08:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:08:15.856163 | orchestrator | 2025-07-12 14:08:15 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:08:15.856331 | orchestrator | 2025-07-12 14:08:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:08:18.915947 | orchestrator | 2025-07-12 14:08:18 | INFO  | Task 
0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:08:18.916052 | orchestrator | 2025-07-12 14:08:18 | INFO  | Wait 1 second(s) until the next check [identical STARTED/wait poll pairs repeat, roughly every 3 seconds, until 14:11:33] 2025-07-12 14:11:33.802953 | orchestrator | 2025-07-12 14:11:33 | INFO  | Task
0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:11:33.803053 | orchestrator | 2025-07-12 14:11:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:11:36.853450 | orchestrator | 2025-07-12 14:11:36 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:11:36.853547 | orchestrator | 2025-07-12 14:11:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:11:39.900289 | orchestrator | 2025-07-12 14:11:39 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state STARTED 2025-07-12 14:11:39.900393 | orchestrator | 2025-07-12 14:11:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 14:11:42.947634 | orchestrator | 2025-07-12 14:11:42 | INFO  | Task 0dfcf4f4-5159-4b16-9a11-bf355e9ec3ff is in state SUCCESS 2025-07-12 14:11:42.949105 | orchestrator | 2025-07-12 14:11:42.949143 | orchestrator | 2025-07-12 14:11:42.949155 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:11:42.949210 | orchestrator | 2025-07-12 14:11:42.949221 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-07-12 14:11:42.949232 | orchestrator | Saturday 12 July 2025 14:03:06 +0000 (0:00:00.312) 0:00:00.312 ********* 2025-07-12 14:11:42.949242 | orchestrator | changed: [testbed-manager] 2025-07-12 14:11:42.949253 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.949262 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:11:42.949296 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:11:42.949306 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:42.949316 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:42.949325 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:42.949335 | orchestrator | 2025-07-12 14:11:42.949345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:11:42.949354 | 
orchestrator | Saturday 12 July 2025 14:03:07 +0000 (0:00:00.758) 0:00:01.071 ********* 2025-07-12 14:11:42.949364 | orchestrator | changed: [testbed-manager] 2025-07-12 14:11:42.949373 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.949383 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:11:42.949453 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:11:42.949467 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:42.949477 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:42.950258 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:42.950373 | orchestrator | 2025-07-12 14:11:42.950399 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:11:42.950413 | orchestrator | Saturday 12 July 2025 14:03:08 +0000 (0:00:00.886) 0:00:01.958 ********* 2025-07-12 14:11:42.950432 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-07-12 14:11:42.950452 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-07-12 14:11:42.950470 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-07-12 14:11:42.950490 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-07-12 14:11:42.950509 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-07-12 14:11:42.950524 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-07-12 14:11:42.950535 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-07-12 14:11:42.950546 | orchestrator | 2025-07-12 14:11:42.950558 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-07-12 14:11:42.950568 | orchestrator | 2025-07-12 14:11:42.950580 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-07-12 14:11:42.950592 | orchestrator | Saturday 12 July 2025 14:03:09 +0000 (0:00:01.070) 0:00:03.028 ********* 
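The task-status polling seen earlier in this log (query the task state, wait a fixed interval, repeat until the state leaves STARTED) is a standard poll-until-done loop. A minimal sketch, assuming a hypothetical `get_state` callable that returns the current state string (in the real job this is the OSISM client querying its task backend):

```python
import time

def wait_for_task(get_state, interval=1.0, timeout=600.0):
    """Poll a task until it leaves the STARTED state.

    get_state -- hypothetical callable returning the current state
                 string, e.g. "STARTED", "SUCCESS", "FAILURE".
    interval  -- seconds to sleep between checks (the log above
                 prints "Wait 1 second(s) until the next check").
    timeout   -- give up after this many seconds overall.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state != "STARTED":
            # terminal (or at least non-running) state reached
            return state
        time.sleep(interval)
    raise TimeoutError("task did not finish within the timeout")
```

Note that the observed ~3-second gap between log entries is the 1-second sleep plus the round-trip time of each status query, so the effective poll period is interval plus query overhead, not the interval alone.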
2025-07-12 14:11:42.950603 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:11:42.950613 | orchestrator | 2025-07-12 14:11:42.950624 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-07-12 14:11:42.950635 | orchestrator | Saturday 12 July 2025 14:03:10 +0000 (0:00:00.645) 0:00:03.674 ********* 2025-07-12 14:11:42.950647 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-07-12 14:11:42.950658 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-07-12 14:11:42.950669 | orchestrator | 2025-07-12 14:11:42.950680 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-07-12 14:11:42.950691 | orchestrator | Saturday 12 July 2025 14:03:15 +0000 (0:00:04.694) 0:00:08.368 ********* 2025-07-12 14:11:42.950702 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 14:11:42.950713 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 14:11:42.950724 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.950735 | orchestrator | 2025-07-12 14:11:42.950746 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-07-12 14:11:42.950757 | orchestrator | Saturday 12 July 2025 14:03:19 +0000 (0:00:04.421) 0:00:12.789 ********* 2025-07-12 14:11:42.950767 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.950778 | orchestrator | 2025-07-12 14:11:42.950789 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-07-12 14:11:42.950800 | orchestrator | Saturday 12 July 2025 14:03:20 +0000 (0:00:00.976) 0:00:13.765 ********* 2025-07-12 14:11:42.950811 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.950822 | orchestrator | 2025-07-12 14:11:42.950856 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-07-12 
14:11:42.950868 | orchestrator | Saturday 12 July 2025 14:03:22 +0000 (0:00:02.002) 0:00:15.768 ********* 2025-07-12 14:11:42.950900 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.950911 | orchestrator | 2025-07-12 14:11:42.950922 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-12 14:11:42.950933 | orchestrator | Saturday 12 July 2025 14:03:26 +0000 (0:00:03.689) 0:00:19.457 ********* 2025-07-12 14:11:42.950944 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.950962 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.950980 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.950999 | orchestrator | 2025-07-12 14:11:42.951010 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-07-12 14:11:42.951021 | orchestrator | Saturday 12 July 2025 14:03:26 +0000 (0:00:00.277) 0:00:19.735 ********* 2025-07-12 14:11:42.951032 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:42.951044 | orchestrator | 2025-07-12 14:11:42.951055 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-07-12 14:11:42.951066 | orchestrator | Saturday 12 July 2025 14:04:03 +0000 (0:00:37.453) 0:00:57.189 ********* 2025-07-12 14:11:42.951077 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.951087 | orchestrator | 2025-07-12 14:11:42.951098 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-12 14:11:42.951109 | orchestrator | Saturday 12 July 2025 14:04:17 +0000 (0:00:13.695) 0:01:10.885 ********* 2025-07-12 14:11:42.951120 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:42.951131 | orchestrator | 2025-07-12 14:11:42.951142 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-12 14:11:42.951153 | orchestrator | Saturday 12 July 2025 14:04:28 +0000 (0:00:11.308) 
0:01:22.193 ********* 2025-07-12 14:11:42.951212 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:42.951224 | orchestrator | 2025-07-12 14:11:42.951236 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-07-12 14:11:42.951247 | orchestrator | Saturday 12 July 2025 14:04:30 +0000 (0:00:01.312) 0:01:23.506 ********* 2025-07-12 14:11:42.951258 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.951269 | orchestrator | 2025-07-12 14:11:42.951280 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-12 14:11:42.951291 | orchestrator | Saturday 12 July 2025 14:04:30 +0000 (0:00:00.478) 0:01:23.984 ********* 2025-07-12 14:11:42.951302 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:11:42.951314 | orchestrator | 2025-07-12 14:11:42.951325 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-07-12 14:11:42.951336 | orchestrator | Saturday 12 July 2025 14:04:31 +0000 (0:00:00.650) 0:01:24.635 ********* 2025-07-12 14:11:42.951347 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:42.951358 | orchestrator | 2025-07-12 14:11:42.951369 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-07-12 14:11:42.951380 | orchestrator | Saturday 12 July 2025 14:04:49 +0000 (0:00:18.320) 0:01:42.955 ********* 2025-07-12 14:11:42.951391 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.951402 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.951413 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.951424 | orchestrator | 2025-07-12 14:11:42.951434 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-07-12 14:11:42.951446 | orchestrator | 2025-07-12 14:11:42.951457 | orchestrator | TASK 
[Bootstrap deploy] ******************************************************** 2025-07-12 14:11:42.951467 | orchestrator | Saturday 12 July 2025 14:04:49 +0000 (0:00:00.293) 0:01:43.249 ********* 2025-07-12 14:11:42.951478 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:11:42.951489 | orchestrator | 2025-07-12 14:11:42.951500 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-07-12 14:11:42.951511 | orchestrator | Saturday 12 July 2025 14:04:50 +0000 (0:00:00.583) 0:01:43.832 ********* 2025-07-12 14:11:42.951522 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.951543 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.951554 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.951565 | orchestrator | 2025-07-12 14:11:42.951576 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-07-12 14:11:42.951587 | orchestrator | Saturday 12 July 2025 14:04:52 +0000 (0:00:02.191) 0:01:46.023 ********* 2025-07-12 14:11:42.951598 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.951609 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.951620 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.951631 | orchestrator | 2025-07-12 14:11:42.951642 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-07-12 14:11:42.951653 | orchestrator | Saturday 12 July 2025 14:04:54 +0000 (0:00:02.177) 0:01:48.200 ********* 2025-07-12 14:11:42.951664 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.951675 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.951686 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.951697 | orchestrator | 2025-07-12 14:11:42.951708 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-12 14:11:42.951719 
| orchestrator | Saturday 12 July 2025 14:04:55 +0000 (0:00:00.332) 0:01:48.533 ********* 2025-07-12 14:11:42.951730 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-12 14:11:42.951742 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.951753 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-12 14:11:42.951764 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.951775 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-07-12 14:11:42.951786 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-07-12 14:11:42.951797 | orchestrator | 2025-07-12 14:11:42.951808 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-07-12 14:11:42.951819 | orchestrator | Saturday 12 July 2025 14:05:03 +0000 (0:00:08.264) 0:01:56.797 ********* 2025-07-12 14:11:42.951830 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.951841 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.951852 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.951863 | orchestrator | 2025-07-12 14:11:42.951880 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-12 14:11:42.951891 | orchestrator | Saturday 12 July 2025 14:05:03 +0000 (0:00:00.342) 0:01:57.140 ********* 2025-07-12 14:11:42.951902 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-12 14:11:42.951913 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.951924 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-12 14:11:42.951935 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.951946 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-12 14:11:42.951957 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.951968 | orchestrator | 2025-07-12 14:11:42.951979 | orchestrator | TASK [nova-cell : Ensuring config directories exist] 
*************************** 2025-07-12 14:11:42.951990 | orchestrator | Saturday 12 July 2025 14:05:04 +0000 (0:00:00.628) 0:01:57.769 ********* 2025-07-12 14:11:42.952001 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.952012 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.952023 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.952034 | orchestrator | 2025-07-12 14:11:42.952045 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-07-12 14:11:42.952057 | orchestrator | Saturday 12 July 2025 14:05:04 +0000 (0:00:00.523) 0:01:58.292 ********* 2025-07-12 14:11:42.952067 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.952079 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.952090 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.952100 | orchestrator | 2025-07-12 14:11:42.952112 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-07-12 14:11:42.952123 | orchestrator | Saturday 12 July 2025 14:05:05 +0000 (0:00:00.955) 0:01:59.248 ********* 2025-07-12 14:11:42.952140 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.952152 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.952383 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.952404 | orchestrator | 2025-07-12 14:11:42.952416 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-07-12 14:11:42.952427 | orchestrator | Saturday 12 July 2025 14:05:07 +0000 (0:00:02.077) 0:02:01.325 ********* 2025-07-12 14:11:42.952438 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.952449 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.952460 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:42.952471 | orchestrator | 2025-07-12 14:11:42.952482 | orchestrator | TASK [nova-cell : Get a list of existing cells] 
******************************** 2025-07-12 14:11:42.952493 | orchestrator | Saturday 12 July 2025 14:05:28 +0000 (0:00:20.784) 0:02:22.110 ********* 2025-07-12 14:11:42.952504 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.952515 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.952526 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:42.952537 | orchestrator | 2025-07-12 14:11:42.952548 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-12 14:11:42.952559 | orchestrator | Saturday 12 July 2025 14:05:40 +0000 (0:00:11.996) 0:02:34.107 ********* 2025-07-12 14:11:42.952570 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:42.952581 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.952591 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.952602 | orchestrator | 2025-07-12 14:11:42.952613 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-07-12 14:11:42.952624 | orchestrator | Saturday 12 July 2025 14:05:41 +0000 (0:00:00.917) 0:02:35.024 ********* 2025-07-12 14:11:42.952635 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.952646 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.952657 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.952668 | orchestrator | 2025-07-12 14:11:42.952679 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-07-12 14:11:42.952691 | orchestrator | Saturday 12 July 2025 14:05:52 +0000 (0:00:11.250) 0:02:46.274 ********* 2025-07-12 14:11:42.952701 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.952714 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.952733 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.952753 | orchestrator | 2025-07-12 14:11:42.952773 | orchestrator | TASK [Bootstrap upgrade] 
******************************************************* 2025-07-12 14:11:42.952792 | orchestrator | Saturday 12 July 2025 14:05:54 +0000 (0:00:01.482) 0:02:47.757 ********* 2025-07-12 14:11:42.952811 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.952831 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.952852 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.952873 | orchestrator | 2025-07-12 14:11:42.952894 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-07-12 14:11:42.952914 | orchestrator | 2025-07-12 14:11:42.952934 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-12 14:11:42.952956 | orchestrator | Saturday 12 July 2025 14:05:54 +0000 (0:00:00.296) 0:02:48.053 ********* 2025-07-12 14:11:42.952978 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:11:42.953001 | orchestrator | 2025-07-12 14:11:42.953021 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-07-12 14:11:42.953099 | orchestrator | Saturday 12 July 2025 14:05:55 +0000 (0:00:00.547) 0:02:48.600 ********* 2025-07-12 14:11:42.953115 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-07-12 14:11:42.953128 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-07-12 14:11:42.953140 | orchestrator | 2025-07-12 14:11:42.953152 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-07-12 14:11:42.953181 | orchestrator | Saturday 12 July 2025 14:05:58 +0000 (0:00:03.107) 0:02:51.707 ********* 2025-07-12 14:11:42.953214 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-07-12 14:11:42.953237 | orchestrator | skipping: [testbed-node-0] => 
(item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-07-12 14:11:42.953251 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-07-12 14:11:42.953272 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-07-12 14:11:42.953291 | orchestrator | 2025-07-12 14:11:42.953310 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-07-12 14:11:42.953328 | orchestrator | Saturday 12 July 2025 14:06:04 +0000 (0:00:06.577) 0:02:58.285 ********* 2025-07-12 14:11:42.953349 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 14:11:42.953360 | orchestrator | 2025-07-12 14:11:42.953371 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-07-12 14:11:42.953382 | orchestrator | Saturday 12 July 2025 14:06:08 +0000 (0:00:03.104) 0:03:01.389 ********* 2025-07-12 14:11:42.953393 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 14:11:42.953404 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-07-12 14:11:42.953415 | orchestrator | 2025-07-12 14:11:42.953425 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-07-12 14:11:42.953436 | orchestrator | Saturday 12 July 2025 14:06:11 +0000 (0:00:03.790) 0:03:05.180 ********* 2025-07-12 14:11:42.953447 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 14:11:42.953458 | orchestrator | 2025-07-12 14:11:42.953469 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-07-12 14:11:42.953480 | orchestrator | Saturday 12 July 2025 14:06:14 +0000 (0:00:03.031) 0:03:08.212 ********* 2025-07-12 14:11:42.953491 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-07-12 
14:11:42.953502 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-07-12 14:11:42.953513 | orchestrator | 2025-07-12 14:11:42.953524 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-07-12 14:11:42.953661 | orchestrator | Saturday 12 July 2025 14:06:22 +0000 (0:00:07.325) 0:03:15.538 ********* 2025-07-12 14:11:42.953693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.953807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.953847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.953941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.953961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.953973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.953985 | orchestrator | 2025-07-12 14:11:42.953996 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-07-12 14:11:42.954007 | orchestrator | Saturday 12 July 2025 14:06:23 +0000 (0:00:01.275) 0:03:16.813 ********* 2025-07-12 14:11:42.954106 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.954120 | orchestrator | 2025-07-12 14:11:42.954131 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-07-12 14:11:42.954142 | orchestrator | Saturday 12 July 2025 14:06:23 +0000 (0:00:00.121) 0:03:16.934 ********* 2025-07-12 14:11:42.954154 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.954196 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.954215 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.954227 | orchestrator | 2025-07-12 14:11:42.954238 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-07-12 14:11:42.954249 | orchestrator | Saturday 12 July 2025 14:06:24 +0000 (0:00:00.539) 0:03:17.474 ********* 2025-07-12 14:11:42.954260 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 14:11:42.954271 | orchestrator | 2025-07-12 14:11:42.954282 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-07-12 14:11:42.954293 | orchestrator | Saturday 12 July 2025 14:06:24 +0000 (0:00:00.674) 0:03:18.148 ********* 2025-07-12 14:11:42.954304 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.954315 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.954326 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.954337 | orchestrator | 2025-07-12 14:11:42.954348 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-12 14:11:42.954359 | orchestrator | 
Saturday 12 July 2025 14:06:25 +0000 (0:00:00.290) 0:03:18.438 ********* 2025-07-12 14:11:42.954371 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:11:42.954382 | orchestrator | 2025-07-12 14:11:42.954393 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-12 14:11:42.954404 | orchestrator | Saturday 12 July 2025 14:06:25 +0000 (0:00:00.739) 0:03:19.178 ********* 2025-07-12 14:11:42.954423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.954483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.954510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.954528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.954541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.954584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.954598 | orchestrator | 2025-07-12 14:11:42.954610 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-12 14:11:42.954630 | orchestrator | Saturday 12 July 2025 14:06:28 +0000 (0:00:02.261) 0:03:21.439 ********* 2025-07-12 14:11:42.954651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:42.954682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.954694 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.954730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:42.954753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.954771 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.954847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:42.954883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.954895 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.954907 | orchestrator | 2025-07-12 14:11:42.954918 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-12 14:11:42.954929 | orchestrator | Saturday 12 July 2025 14:06:28 +0000 (0:00:00.576) 0:03:22.016 ********* 2025-07-12 14:11:42.954947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:42.954960 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.954971 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.955020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:42.955042 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.955054 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.955066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:42.955083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.955095 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.955106 | orchestrator | 2025-07-12 14:11:42.955117 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-07-12 14:11:42.955129 | orchestrator | Saturday 12 July 2025 14:06:29 +0000 (0:00:01.004) 0:03:23.021 ********* 2025-07-12 14:11:42.955204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.955229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.955247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.955260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.955322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.955358 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.955377 | orchestrator | 2025-07-12 14:11:42.955397 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-07-12 14:11:42.955416 | orchestrator | Saturday 12 July 2025 14:06:32 +0000 (0:00:02.455) 0:03:25.476 ********* 2025-07-12 14:11:42.955436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.955464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.955530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.955567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.955591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.955611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.955624 | orchestrator | 2025-07-12 14:11:42.955636 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-07-12 14:11:42.955647 | orchestrator | Saturday 12 July 2025 14:06:37 +0000 (0:00:05.483) 0:03:30.959 ********* 2025-07-12 14:11:42.955664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-07-12 14:11:42.955719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.955734 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.955745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 
14:11:42.955757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.955769 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.955786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 14:11:42.955805 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.955816 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.955827 | orchestrator | 2025-07-12 14:11:42.955839 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-07-12 14:11:42.955850 | orchestrator | Saturday 12 July 2025 14:06:38 +0000 (0:00:00.615) 0:03:31.575 ********* 2025-07-12 14:11:42.955863 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.955883 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:11:42.955903 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:11:42.955921 | orchestrator | 2025-07-12 14:11:42.955984 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-07-12 14:11:42.956005 | orchestrator | Saturday 12 July 2025 14:06:40 +0000 (0:00:01.936) 0:03:33.512 ********* 2025-07-12 14:11:42.956024 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.956043 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.956062 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.956080 | orchestrator | 2025-07-12 14:11:42.956100 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-07-12 14:11:42.956120 | orchestrator | Saturday 12 July 2025 14:06:40 +0000 (0:00:00.307) 0:03:33.820 ********* 2025-07-12 14:11:42.956141 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.956187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.956282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 14:11:42.956300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.956312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.956324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.956335 | orchestrator | 2025-07-12 14:11:42.956346 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-12 14:11:42.956358 | orchestrator | Saturday 12 July 2025 14:06:42 +0000 (0:00:01.885) 0:03:35.706 ********* 2025-07-12 14:11:42.956369 | orchestrator | 2025-07-12 14:11:42.956380 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2025-07-12 14:11:42.956391 | orchestrator | Saturday 12 July 2025 14:06:42 +0000 (0:00:00.142) 0:03:35.848 ********* 2025-07-12 14:11:42.956401 | orchestrator | 2025-07-12 14:11:42.956412 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-12 14:11:42.956423 | orchestrator | Saturday 12 July 2025 14:06:42 +0000 (0:00:00.131) 0:03:35.980 ********* 2025-07-12 14:11:42.956434 | orchestrator | 2025-07-12 14:11:42.956458 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-07-12 14:11:42.956478 | orchestrator | Saturday 12 July 2025 14:06:42 +0000 (0:00:00.293) 0:03:36.273 ********* 2025-07-12 14:11:42.956499 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.956518 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:11:42.956538 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:11:42.956558 | orchestrator | 2025-07-12 14:11:42.956578 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-07-12 14:11:42.956589 | orchestrator | Saturday 12 July 2025 14:07:06 +0000 (0:00:23.129) 0:03:59.402 ********* 2025-07-12 14:11:42.956600 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.956611 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:11:42.956622 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:11:42.956633 | orchestrator | 2025-07-12 14:11:42.956644 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-07-12 14:11:42.956655 | orchestrator | 2025-07-12 14:11:42.956671 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 14:11:42.956683 | orchestrator | Saturday 12 July 2025 14:07:16 +0000 (0:00:10.815) 0:04:10.218 ********* 2025-07-12 14:11:42.956694 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:11:42.956706 | orchestrator | 2025-07-12 14:11:42.956725 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 14:11:42.956745 | orchestrator | Saturday 12 July 2025 14:07:18 +0000 (0:00:01.194) 0:04:11.412 ********* 2025-07-12 14:11:42.956763 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.956783 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.956801 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.956819 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.956831 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.956849 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.956867 | orchestrator | 2025-07-12 14:11:42.956886 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-07-12 14:11:42.956905 | orchestrator | Saturday 12 July 2025 14:07:18 +0000 (0:00:00.768) 0:04:12.180 ********* 2025-07-12 14:11:42.956924 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.956942 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.956961 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.956980 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 14:11:42.956998 | orchestrator | 2025-07-12 14:11:42.957018 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-12 14:11:42.957090 | orchestrator | Saturday 12 July 2025 14:07:19 +0000 (0:00:00.995) 0:04:13.176 ********* 2025-07-12 14:11:42.957116 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-07-12 14:11:42.957136 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-07-12 14:11:42.957154 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-07-12 
14:11:42.957246 | orchestrator | 2025-07-12 14:11:42.957267 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-12 14:11:42.957287 | orchestrator | Saturday 12 July 2025 14:07:20 +0000 (0:00:00.707) 0:04:13.883 ********* 2025-07-12 14:11:42.957304 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-07-12 14:11:42.957322 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-07-12 14:11:42.957340 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-07-12 14:11:42.957360 | orchestrator | 2025-07-12 14:11:42.957377 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-12 14:11:42.957396 | orchestrator | Saturday 12 July 2025 14:07:21 +0000 (0:00:01.129) 0:04:15.012 ********* 2025-07-12 14:11:42.957408 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-07-12 14:11:42.957419 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.957441 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-07-12 14:11:42.957452 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.957463 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-07-12 14:11:42.957474 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.957485 | orchestrator | 2025-07-12 14:11:42.957496 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-07-12 14:11:42.957507 | orchestrator | Saturday 12 July 2025 14:07:22 +0000 (0:00:00.738) 0:04:15.751 ********* 2025-07-12 14:11:42.957519 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 14:11:42.957530 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 14:11:42.957541 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.957552 | orchestrator | skipping: 
[testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 14:11:42.957563 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 14:11:42.957575 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.957586 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 14:11:42.957597 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 14:11:42.957608 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 14:11:42.957619 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.957630 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 14:11:42.957641 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 14:11:42.957652 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 14:11:42.957663 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 14:11:42.957674 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 14:11:42.957685 | orchestrator | 2025-07-12 14:11:42.957696 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-07-12 14:11:42.957707 | orchestrator | Saturday 12 July 2025 14:07:23 +0000 (0:00:01.032) 0:04:16.784 ********* 2025-07-12 14:11:42.957718 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.957729 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.957740 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.957750 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:42.957760 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:42.957770 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:42.957780 | 
orchestrator | 2025-07-12 14:11:42.957790 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-07-12 14:11:42.957799 | orchestrator | Saturday 12 July 2025 14:07:24 +0000 (0:00:01.384) 0:04:18.169 ********* 2025-07-12 14:11:42.957809 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.957819 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.957835 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.957846 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:42.957856 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:42.957872 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:42.957890 | orchestrator | 2025-07-12 14:11:42.957907 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-12 14:11:42.957922 | orchestrator | Saturday 12 July 2025 14:07:26 +0000 (0:00:01.661) 0:04:19.830 ********* 2025-07-12 14:11:42.957941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958070 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958085 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958157 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958270 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958303 | orchestrator | 2025-07-12 14:11:42.958313 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 14:11:42.958323 | orchestrator | Saturday 12 July 2025 
14:07:28 +0000 (0:00:02.503) 0:04:22.334 ********* 2025-07-12 14:11:42.958333 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:11:42.958344 | orchestrator | 2025-07-12 14:11:42.958354 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-12 14:11:42.958364 | orchestrator | Saturday 12 July 2025 14:07:30 +0000 (0:00:01.243) 0:04:23.577 ********* 2025-07-12 14:11:42.958374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958456 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958477 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.958625 | orchestrator | 2025-07-12 14:11:42.958642 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-12 14:11:42.958659 | orchestrator | Saturday 12 July 2025 14:07:33 +0000 (0:00:03.732) 0:04:27.310 ********* 2025-07-12 14:11:42.958711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.958731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:42.958749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.958774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.958800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:42.958811 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.958853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.958866 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.958876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.958886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 
14:11:42.958896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.958915 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.958930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 14:11:42.958941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.958954 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.959014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 14:11:42.959028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.959038 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.959048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 14:11:42.959059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.959076 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.959086 | orchestrator | 2025-07-12 14:11:42.959096 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-12 14:11:42.959106 | orchestrator | Saturday 12 July 2025 14:07:35 +0000 (0:00:01.885) 0:04:29.195 ********* 2025-07-12 14:11:42.959121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.959132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:42.959237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.959252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.959263 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.959273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:42.959294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.959304 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 14:11:42.959320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.959358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:42.959370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.959380 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.959390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 14:11:42.959407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.959418 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.959428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 14:11:42.959443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.959453 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.959464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 14:11:42.959499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 14:11:42.959511 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:42.959522 | orchestrator |
2025-07-12 14:11:42.959532 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-07-12 14:11:42.959542 | orchestrator | Saturday 12 July 2025 14:07:37 +0000 (0:00:02.064) 0:04:31.259 *********
2025-07-12 14:11:42.959552 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:42.959561 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:42.959571 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:42.959581 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 14:11:42.959591 | orchestrator |
2025-07-12 14:11:42.959601 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-07-12 14:11:42.959617 | orchestrator | Saturday 12 July 2025 14:07:38 +0000 (0:00:00.869) 0:04:32.129 *********
2025-07-12 14:11:42.959627 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 14:11:42.959637 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 14:11:42.959647 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 14:11:42.959656 | orchestrator |
2025-07-12 14:11:42.959666 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-07-12 14:11:42.959676 | orchestrator | Saturday 12 July 2025 14:07:39 +0000 (0:00:01.090) 0:04:33.219 *********
2025-07-12 14:11:42.959685 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 14:11:42.959695 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 14:11:42.959705 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 14:11:42.959715 | orchestrator |
2025-07-12 14:11:42.959724 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-07-12 14:11:42.959734 | orchestrator | Saturday 12 July 2025 14:07:40 +0000 (0:00:00.931) 0:04:34.150 *********
2025-07-12 14:11:42.959743 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:11:42.959751 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:11:42.959759 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:11:42.959767 | orchestrator |
2025-07-12 14:11:42.959775 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-07-12 14:11:42.959783 | orchestrator | Saturday 12 July 2025 14:07:41 +0000 (0:00:00.513) 0:04:34.663 *********
2025-07-12 14:11:42.959791 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:11:42.959799 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:11:42.959807 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:11:42.959815 | orchestrator |
2025-07-12 14:11:42.959823 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-07-12 14:11:42.959831 | orchestrator | Saturday 12 July 2025 14:07:41 +0000 (0:00:00.506) 0:04:35.170 *********
2025-07-12 14:11:42.959839 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-12 14:11:42.959847 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-12 14:11:42.959855 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-12 14:11:42.959863 | orchestrator |
2025-07-12 14:11:42.959871 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-07-12 14:11:42.959879 | orchestrator | Saturday 12 July 2025 14:07:43 +0000 (0:00:01.342) 0:04:36.512 *********
2025-07-12 14:11:42.959887 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-12 14:11:42.959895 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-12 14:11:42.959903 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-12 14:11:42.959911 | orchestrator |
2025-07-12 14:11:42.959919 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-07-12 14:11:42.959926 | orchestrator | Saturday 12 July 2025 14:07:44 +0000 (0:00:01.282) 0:04:37.794 *********
2025-07-12 14:11:42.959938 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-12 14:11:42.959946 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-12 14:11:42.959954 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-12 14:11:42.959962 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-07-12 14:11:42.959970 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-07-12 14:11:42.959978 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-07-12 14:11:42.959986 | orchestrator |
2025-07-12 14:11:42.959994 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-07-12 14:11:42.960002 | orchestrator | Saturday 12 July 2025 14:07:48 +0000 (0:00:03.710) 0:04:41.505 *********
2025-07-12 14:11:42.960010 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:42.960018 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:42.960026 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:42.960034 | orchestrator |
2025-07-12 14:11:42.960042 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-07-12 14:11:42.960055 | orchestrator | Saturday 12 July 2025 14:07:48 +0000 (0:00:00.302) 0:04:41.807 *********
2025-07-12 14:11:42.960063 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:42.960071 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:42.960079 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:42.960087 | orchestrator |
2025-07-12 14:11:42.960095 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-07-12 14:11:42.960103 | orchestrator | Saturday 12 July 2025 14:07:48 +0000 (0:00:00.299) 0:04:42.107 *********
2025-07-12 14:11:42.960111 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:42.960119 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:42.960127 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:42.960135 | orchestrator |
2025-07-12 14:11:42.960185 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-07-12 14:11:42.960196 | orchestrator | Saturday 12 July 2025 14:07:50 +0000 (0:00:01.459) 0:04:43.566 *********
2025-07-12 14:11:42.960205 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-12 14:11:42.960213 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-12 14:11:42.960221 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-12 14:11:42.960229 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-12 14:11:42.960237 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-12 14:11:42.960246 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-12 14:11:42.960254 | orchestrator |
2025-07-12 14:11:42.960262 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-07-12 14:11:42.960270 | orchestrator | Saturday 12 July 2025 14:07:53 +0000 (0:00:03.172) 0:04:46.738 *********
2025-07-12 14:11:42.960278 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 14:11:42.960287 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 14:11:42.960295 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 14:11:42.960303 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 14:11:42.960311 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:11:42.960318 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 14:11:42.960327 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:11:42.960335 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 14:11:42.960343 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:11:42.960351 | orchestrator |
2025-07-12 14:11:42.960359 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-07-12 14:11:42.960367 | orchestrator | Saturday 12 July 2025 14:07:56 +0000 (0:00:03.254) 0:04:49.992 *********
2025-07-12 14:11:42.960375 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:42.960383 | orchestrator |
2025-07-12 14:11:42.960391 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-07-12 14:11:42.960399 | orchestrator | Saturday 12 July 2025 14:07:56 +0000 (0:00:00.119) 0:04:50.112 *********
2025-07-12 14:11:42.960407 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:42.960415 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:42.960423 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:42.960431 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:42.960439 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:42.960447 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:42.960455 | orchestrator |
2025-07-12 14:11:42.960463 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-07-12 14:11:42.960478 | orchestrator | Saturday 12 July 2025 14:07:57 +0000 (0:00:00.767) 0:04:50.880 *********
2025-07-12 14:11:42.960486 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 14:11:42.960494 | orchestrator |
2025-07-12 14:11:42.960502 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-07-12 14:11:42.960510 | orchestrator | Saturday 12 July 2025 14:07:58 +0000 (0:00:00.672) 0:04:51.552 *********
2025-07-12 14:11:42.960518 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:11:42.960526 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:11:42.960534 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:11:42.960542 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:11:42.960550 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:11:42.960558 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:11:42.960565 | orchestrator |
2025-07-12 14:11:42.960573 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-07-12 14:11:42.960588 | orchestrator | Saturday 12 July 2025 14:07:58 +0000 (0:00:00.550) 0:04:52.102 *********
2025-07-12 14:11:42.960597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup',
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960747 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960799 | orchestrator | 2025-07-12 14:11:42.960812 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-07-12 14:11:42.960821 | orchestrator | Saturday 12 July 2025 14:08:02 +0000 (0:00:04.027) 0:04:56.130 ********* 2025-07-12 14:11:42.960830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.960845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:42.960857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.960866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:42.960880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.960889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:42.960902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960911 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.960995 | orchestrator | 2025-07-12 14:11:42.961003 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-07-12 14:11:42.961011 | orchestrator | Saturday 12 July 2025 14:08:09 +0000 (0:00:06.296) 0:05:02.426 ********* 2025-07-12 14:11:42.961019 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.961027 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.961035 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.961043 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.961051 | orchestrator | skipping: [testbed-node-0] 2025-07-12 
14:11:42.961059 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.961067 | orchestrator | 2025-07-12 14:11:42.961075 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-07-12 14:11:42.961083 | orchestrator | Saturday 12 July 2025 14:08:10 +0000 (0:00:01.592) 0:05:04.018 ********* 2025-07-12 14:11:42.961091 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-12 14:11:42.961099 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-12 14:11:42.961107 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-12 14:11:42.961115 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-07-12 14:11:42.961127 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-12 14:11:42.961136 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-07-12 14:11:42.961144 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-12 14:11:42.961156 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-12 14:11:42.961183 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.961191 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.961199 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-07-12 14:11:42.961207 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.961215 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-12 14:11:42.961223 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-12 
14:11:42.961231 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-07-12 14:11:42.961239 | orchestrator | 2025-07-12 14:11:42.961247 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-07-12 14:11:42.961255 | orchestrator | Saturday 12 July 2025 14:08:14 +0000 (0:00:03.643) 0:05:07.662 ********* 2025-07-12 14:11:42.961263 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.961271 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.961279 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.961287 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.961295 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.961302 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.961311 | orchestrator | 2025-07-12 14:11:42.961318 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-07-12 14:11:42.961326 | orchestrator | Saturday 12 July 2025 14:08:15 +0000 (0:00:00.849) 0:05:08.512 ********* 2025-07-12 14:11:42.961334 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-12 14:11:42.961348 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-12 14:11:42.961361 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-12 14:11:42.961375 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-07-12 14:11:42.961388 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-12 14:11:42.961400 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 
'auth.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:42.961409 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-07-12 14:11:42.961416 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:42.961424 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:42.961432 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:42.961440 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.961448 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:42.961456 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.961469 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 14:11:42.961483 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.961496 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:42.961509 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:42.961523 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:42.961545 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:42.961553 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:42.961561 | orchestrator | changed: 
[testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 14:11:42.961569 | orchestrator | 2025-07-12 14:11:42.961577 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-07-12 14:11:42.961585 | orchestrator | Saturday 12 July 2025 14:08:20 +0000 (0:00:05.271) 0:05:13.784 ********* 2025-07-12 14:11:42.961593 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 14:11:42.961602 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 14:11:42.961615 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 14:11:42.961623 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 14:11:42.961631 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 14:11:42.961639 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 14:11:42.961647 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 14:11:42.961655 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 14:11:42.961663 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 14:11:42.961671 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 14:11:42.961678 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 14:11:42.961686 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-12 14:11:42.961694 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.961702 | 
orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 14:11:42.961710 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-12 14:11:42.961718 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.961726 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-12 14:11:42.961734 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.961742 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 14:11:42.961750 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 14:11:42.961758 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 14:11:42.961766 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 14:11:42.961774 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 14:11:42.961782 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 14:11:42.961790 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 14:11:42.961797 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 14:11:42.961806 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 14:11:42.961813 | orchestrator | 2025-07-12 14:11:42.961821 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-07-12 14:11:42.961829 | orchestrator | Saturday 12 July 2025 14:08:27 +0000 (0:00:07.056) 0:05:20.841 ********* 2025-07-12 14:11:42.961842 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.961850 | orchestrator | 
skipping: [testbed-node-4] 2025-07-12 14:11:42.961858 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.961866 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.961874 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.961882 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.961890 | orchestrator | 2025-07-12 14:11:42.961898 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-07-12 14:11:42.961906 | orchestrator | Saturday 12 July 2025 14:08:28 +0000 (0:00:00.549) 0:05:21.390 ********* 2025-07-12 14:11:42.961914 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.961922 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.961930 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.961938 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.961946 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.961954 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.961964 | orchestrator | 2025-07-12 14:11:42.961987 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-07-12 14:11:42.961999 | orchestrator | Saturday 12 July 2025 14:08:28 +0000 (0:00:00.772) 0:05:22.163 ********* 2025-07-12 14:11:42.962007 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.962044 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.962062 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.962076 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:42.962090 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:42.962103 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:42.962114 | orchestrator | 2025-07-12 14:11:42.962123 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-07-12 14:11:42.962131 | orchestrator | Saturday 12 July 2025 14:08:30 +0000 (0:00:02.004) 
0:05:24.168 ********* 2025-07-12 14:11:42.962146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.962263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:42.962295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.962313 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.962322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.962336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 
14:11:42.962344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.962353 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.962371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 14:11:42.962379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 14:11:42.962393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.962402 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.962410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 
14:11:42.962422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.962430 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.962439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 14:11:42.962452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 14:11:42.962461 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.962474 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.962483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 14:11:42.962491 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.962499 | orchestrator | 2025-07-12 14:11:42.962507 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-07-12 14:11:42.962515 | orchestrator | Saturday 12 July 2025 14:08:32 +0000 (0:00:01.662) 0:05:25.830 ********* 2025-07-12 14:11:42.962523 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-12 14:11:42.962531 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-12 14:11:42.962539 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.962547 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-12 14:11:42.962555 | 
orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-12 14:11:42.962563 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.962571 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-12 14:11:42.962579 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-07-12 14:11:42.962587 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.962595 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-12 14:11:42.962602 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-12 14:11:42.962610 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.962618 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-12 14:11:42.962625 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-12 14:11:42.962632 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.962638 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-12 14:11:42.962645 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-12 14:11:42.962652 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.962658 | orchestrator | 2025-07-12 14:11:42.962668 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-07-12 14:11:42.962675 | orchestrator | Saturday 12 July 2025 14:08:33 +0000 (0:00:00.636) 0:05:26.467 ********* 2025-07-12 14:11:42.962682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962706 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962761 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 
14:11:42.962768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 14:11:42.962823 | orchestrator | 2025-07-12 14:11:42.962830 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 14:11:42.962836 | orchestrator | Saturday 12 July 2025 14:08:36 +0000 (0:00:03.110) 0:05:29.577 ********* 2025-07-12 14:11:42.962843 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.962850 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.962857 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.962863 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.962870 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.962877 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.962883 | orchestrator | 2025-07-12 14:11:42.962890 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 14:11:42.962896 | orchestrator | Saturday 12 July 2025 14:08:36 +0000 (0:00:00.670) 0:05:30.248 ********* 2025-07-12 14:11:42.962903 | orchestrator | 2025-07-12 14:11:42.962910 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 14:11:42.962916 | orchestrator | Saturday 12 July 2025 14:08:37 +0000 (0:00:00.359) 0:05:30.607 ********* 2025-07-12 14:11:42.962923 | orchestrator | 2025-07-12 14:11:42.962929 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 14:11:42.962936 | orchestrator | Saturday 12 July 2025 14:08:37 +0000 (0:00:00.135) 0:05:30.743 ********* 2025-07-12 14:11:42.962943 | orchestrator | 2025-07-12 14:11:42.962950 | orchestrator | TASK 
[nova-cell : Flush handlers] ********************************************** 2025-07-12 14:11:42.962956 | orchestrator | Saturday 12 July 2025 14:08:37 +0000 (0:00:00.140) 0:05:30.883 ********* 2025-07-12 14:11:42.962963 | orchestrator | 2025-07-12 14:11:42.962969 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 14:11:42.962976 | orchestrator | Saturday 12 July 2025 14:08:37 +0000 (0:00:00.139) 0:05:31.023 ********* 2025-07-12 14:11:42.962983 | orchestrator | 2025-07-12 14:11:42.962989 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 14:11:42.962996 | orchestrator | Saturday 12 July 2025 14:08:37 +0000 (0:00:00.131) 0:05:31.155 ********* 2025-07-12 14:11:42.963002 | orchestrator | 2025-07-12 14:11:42.963009 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-07-12 14:11:42.963016 | orchestrator | Saturday 12 July 2025 14:08:37 +0000 (0:00:00.133) 0:05:31.288 ********* 2025-07-12 14:11:42.963022 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.963029 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:11:42.963036 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:11:42.963042 | orchestrator | 2025-07-12 14:11:42.963049 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-07-12 14:11:42.963055 | orchestrator | Saturday 12 July 2025 14:08:50 +0000 (0:00:12.253) 0:05:43.541 ********* 2025-07-12 14:11:42.963066 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.963073 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:11:42.963080 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:11:42.963086 | orchestrator | 2025-07-12 14:11:42.963096 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-07-12 14:11:42.963103 | orchestrator | Saturday 12 July 2025 
14:09:02 +0000 (0:00:12.740) 0:05:56.282 ********* 2025-07-12 14:11:42.963110 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:42.963117 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:42.963123 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:42.963130 | orchestrator | 2025-07-12 14:11:42.963137 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-07-12 14:11:42.963143 | orchestrator | Saturday 12 July 2025 14:09:23 +0000 (0:00:20.526) 0:06:16.808 ********* 2025-07-12 14:11:42.963150 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:42.963157 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:42.963204 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:42.963212 | orchestrator | 2025-07-12 14:11:42.963219 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-07-12 14:11:42.963226 | orchestrator | Saturday 12 July 2025 14:10:05 +0000 (0:00:42.003) 0:06:58.812 ********* 2025-07-12 14:11:42.963232 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:42.963239 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:42.963246 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:42.963253 | orchestrator | 2025-07-12 14:11:42.963259 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-07-12 14:11:42.963266 | orchestrator | Saturday 12 July 2025 14:10:06 +0000 (0:00:00.977) 0:06:59.790 ********* 2025-07-12 14:11:42.963273 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:42.963280 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:42.963286 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:42.963293 | orchestrator | 2025-07-12 14:11:42.963300 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-07-12 14:11:42.963310 | orchestrator | Saturday 12 July 2025 14:10:07 +0000 
(0:00:00.767) 0:07:00.557 ********* 2025-07-12 14:11:42.963317 | orchestrator | changed: [testbed-node-5] 2025-07-12 14:11:42.963324 | orchestrator | changed: [testbed-node-4] 2025-07-12 14:11:42.963331 | orchestrator | changed: [testbed-node-3] 2025-07-12 14:11:42.963338 | orchestrator | 2025-07-12 14:11:42.963344 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-07-12 14:11:42.963351 | orchestrator | Saturday 12 July 2025 14:10:34 +0000 (0:00:27.228) 0:07:27.785 ********* 2025-07-12 14:11:42.963358 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.963364 | orchestrator | 2025-07-12 14:11:42.963371 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-07-12 14:11:42.963378 | orchestrator | Saturday 12 July 2025 14:10:34 +0000 (0:00:00.131) 0:07:27.917 ********* 2025-07-12 14:11:42.963385 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.963391 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.963398 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.963405 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.963412 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.963419 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-07-12 14:11:42.963425 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-07-12 14:11:42.963432 | orchestrator | 2025-07-12 14:11:42.963439 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-07-12 14:11:42.963446 | orchestrator | Saturday 12 July 2025 14:10:56 +0000 (0:00:22.272) 0:07:50.190 ********* 2025-07-12 14:11:42.963453 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.963459 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.963466 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.963478 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.963485 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.963491 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.963498 | orchestrator | 2025-07-12 14:11:42.963505 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-07-12 14:11:42.963512 | orchestrator | Saturday 12 July 2025 14:11:04 +0000 (0:00:08.086) 0:07:58.276 ********* 2025-07-12 14:11:42.963518 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.963525 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.963532 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.963538 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.963545 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.963552 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-07-12 14:11:42.963559 | orchestrator | 2025-07-12 14:11:42.963565 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-12 14:11:42.963572 | orchestrator | Saturday 12 July 2025 14:11:08 +0000 (0:00:03.536) 0:08:01.813 ********* 2025-07-12 14:11:42.963579 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-07-12 14:11:42.963585 | 
orchestrator | 2025-07-12 14:11:42.963592 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-12 14:11:42.963599 | orchestrator | Saturday 12 July 2025 14:11:20 +0000 (0:00:11.614) 0:08:13.428 ********* 2025-07-12 14:11:42.963605 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-07-12 14:11:42.963612 | orchestrator | 2025-07-12 14:11:42.963619 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-07-12 14:11:42.963626 | orchestrator | Saturday 12 July 2025 14:11:21 +0000 (0:00:01.274) 0:08:14.702 ********* 2025-07-12 14:11:42.963632 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.963639 | orchestrator | 2025-07-12 14:11:42.963646 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-07-12 14:11:42.963653 | orchestrator | Saturday 12 July 2025 14:11:22 +0000 (0:00:01.384) 0:08:16.087 ********* 2025-07-12 14:11:42.963659 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-07-12 14:11:42.963666 | orchestrator | 2025-07-12 14:11:42.963673 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-07-12 14:11:42.963680 | orchestrator | Saturday 12 July 2025 14:11:33 +0000 (0:00:10.388) 0:08:26.476 ********* 2025-07-12 14:11:42.963686 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:11:42.963693 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:11:42.963706 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:11:42.963713 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:11:42.963720 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:11:42.963727 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:11:42.963734 | orchestrator | 2025-07-12 14:11:42.963740 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-07-12 14:11:42.963747 | orchestrator | 2025-07-12 
14:11:42.963754 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-07-12 14:11:42.963760 | orchestrator | Saturday 12 July 2025 14:11:34 +0000 (0:00:01.723) 0:08:28.200 ********* 2025-07-12 14:11:42.963767 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:11:42.963774 | orchestrator | changed: [testbed-node-1] 2025-07-12 14:11:42.963781 | orchestrator | changed: [testbed-node-2] 2025-07-12 14:11:42.963787 | orchestrator | 2025-07-12 14:11:42.963794 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-07-12 14:11:42.963801 | orchestrator | 2025-07-12 14:11:42.963807 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-07-12 14:11:42.963814 | orchestrator | Saturday 12 July 2025 14:11:36 +0000 (0:00:01.152) 0:08:29.352 ********* 2025-07-12 14:11:42.963821 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.963827 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.963834 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.963841 | orchestrator | 2025-07-12 14:11:42.963848 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-07-12 14:11:42.963860 | orchestrator | 2025-07-12 14:11:42.963867 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-07-12 14:11:42.963873 | orchestrator | Saturday 12 July 2025 14:11:36 +0000 (0:00:00.487) 0:08:29.839 ********* 2025-07-12 14:11:42.963880 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-07-12 14:11:42.963890 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-12 14:11:42.963897 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-12 14:11:42.963904 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-07-12 14:11:42.963911 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-07-12 14:11:42.963918 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:42.963924 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:11:42.963931 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-07-12 14:11:42.963938 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-12 14:11:42.963944 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-12 14:11:42.963951 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-07-12 14:11:42.963958 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-07-12 14:11:42.963965 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:42.963971 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:11:42.963978 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-07-12 14:11:42.963985 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-12 14:11:42.963991 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-07-12 14:11:42.963998 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-07-12 14:11:42.964005 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-07-12 14:11:42.964011 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:42.964018 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:11:42.964025 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-07-12 14:11:42.964032 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-12 14:11:42.964038 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-12 14:11:42.964045 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-07-12 14:11:42.964052 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-07-12 14:11:42.964059 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:42.964066 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.964072 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-07-12 14:11:42.964079 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-12 14:11:42.964086 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-12 14:11:42.964092 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-07-12 14:11:42.964099 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-07-12 14:11:42.964106 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:42.964112 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.964119 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-07-12 14:11:42.964126 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-12 14:11:42.964133 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-12 14:11:42.964140 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-07-12 14:11:42.964146 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-07-12 14:11:42.964153 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-07-12 14:11:42.964180 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.964187 | orchestrator | 2025-07-12 14:11:42.964194 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-07-12 14:11:42.964201 | orchestrator | 2025-07-12 14:11:42.964208 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-07-12 14:11:42.964214 | orchestrator | Saturday 12 July 2025 14:11:37 +0000 (0:00:01.327) 
0:08:31.167 ********* 2025-07-12 14:11:42.964221 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-07-12 14:11:42.964228 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-07-12 14:11:42.964235 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.964245 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-07-12 14:11:42.964252 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-07-12 14:11:42.964259 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.964265 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-07-12 14:11:42.964272 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-07-12 14:11:42.964279 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:11:42.964286 | orchestrator | 2025-07-12 14:11:42.964292 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-07-12 14:11:42.964299 | orchestrator | 2025-07-12 14:11:42.964306 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-07-12 14:11:42.964313 | orchestrator | Saturday 12 July 2025 14:11:38 +0000 (0:00:00.753) 0:08:31.920 ********* 2025-07-12 14:11:42.964319 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.964326 | orchestrator | 2025-07-12 14:11:42.964333 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-07-12 14:11:42.964339 | orchestrator | 2025-07-12 14:11:42.964346 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-07-12 14:11:42.964353 | orchestrator | Saturday 12 July 2025 14:11:39 +0000 (0:00:00.650) 0:08:32.570 ********* 2025-07-12 14:11:42.964359 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:11:42.964366 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:11:42.964373 | orchestrator | skipping: [testbed-node-2] 
2025-07-12 14:11:42.964380 | orchestrator | 2025-07-12 14:11:42.964386 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:11:42.964393 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:11:42.964404 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-07-12 14:11:42.964411 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-07-12 14:11:42.964418 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-07-12 14:11:42.964425 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-12 14:11:42.964432 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-07-12 14:11:42.964439 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-07-12 14:11:42.964446 | orchestrator | 2025-07-12 14:11:42.964453 | orchestrator | 2025-07-12 14:11:42.964459 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:11:42.964466 | orchestrator | Saturday 12 July 2025 14:11:39 +0000 (0:00:00.420) 0:08:32.991 ********* 2025-07-12 14:11:42.964473 | orchestrator | =============================================================================== 2025-07-12 14:11:42.964484 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 42.00s 2025-07-12 14:11:42.964491 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 37.45s 2025-07-12 14:11:42.964498 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 27.23s 2025-07-12 14:11:42.964504 | orchestrator | nova : Restart 
nova-scheduler container -------------------------------- 23.13s 2025-07-12 14:11:42.964511 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.27s 2025-07-12 14:11:42.964518 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.79s 2025-07-12 14:11:42.964524 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.53s 2025-07-12 14:11:42.964531 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.32s 2025-07-12 14:11:42.964538 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.70s 2025-07-12 14:11:42.964545 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.74s 2025-07-12 14:11:42.964551 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.25s 2025-07-12 14:11:42.964558 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.00s 2025-07-12 14:11:42.964565 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.61s 2025-07-12 14:11:42.964571 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.31s 2025-07-12 14:11:42.964578 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.25s 2025-07-12 14:11:42.964585 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.82s 2025-07-12 14:11:42.964591 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.39s 2025-07-12 14:11:42.964598 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.26s 2025-07-12 14:11:42.964605 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.09s 2025-07-12 14:11:42.964611 | orchestrator | service-ks-register : nova | 
Granting user roles ------------------------ 7.33s 2025-07-12 14:11:42.964618 | orchestrator | 2025-07-12 14:11:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 14:12:40.806086 | orchestrator | 2025-07-12 14:12:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 14:12:43.855010 | orchestrator | 2025-07-12 14:12:44.155715 | orchestrator | 2025-07-12 14:12:44.161087 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Jul 12 14:12:44 UTC 2025 2025-07-12 14:12:44.161215 | orchestrator | 2025-07-12 14:12:44.463439 | orchestrator | ok: Runtime: 0:36:47.817207 2025-07-12 14:12:44.695368 | 2025-07-12 14:12:44.695519 | TASK [Bootstrap services] 2025-07-12 14:12:45.447797 | orchestrator | 2025-07-12 14:12:45.447991 | orchestrator | # BOOTSTRAP 2025-07-12 14:12:45.448015 | orchestrator | 2025-07-12 14:12:45.448029 | orchestrator | + set -e 2025-07-12 14:12:45.448043 | orchestrator | + echo 2025-07-12 14:12:45.448059 | orchestrator | + echo '# BOOTSTRAP' 2025-07-12 14:12:45.448077 | orchestrator | + echo 2025-07-12 14:12:45.448122 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-07-12 14:12:45.454186 | orchestrator | + set -e 2025-07-12 14:12:45.454441 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-07-12 14:12:49.601501 | orchestrator | 2025-07-12 14:12:49 | INFO  | It takes a moment until task ff78d66f-1d66-4d5f-b4e3-c6499f34d20e (flavor-manager) has been started and output is visible here. 
2025-07-12 14:12:57.676933 | orchestrator | 2025-07-12 14:12:53 | INFO  | Flavor SCS-1V-4 created 2025-07-12 14:12:57.677063 | orchestrator | 2025-07-12 14:12:53 | INFO  | Flavor SCS-2V-8 created 2025-07-12 14:12:57.677083 | orchestrator | 2025-07-12 14:12:54 | INFO  | Flavor SCS-4V-16 created 2025-07-12 14:12:57.677096 | orchestrator | 2025-07-12 14:12:54 | INFO  | Flavor SCS-8V-32 created 2025-07-12 14:12:57.677108 | orchestrator | 2025-07-12 14:12:54 | INFO  | Flavor SCS-1V-2 created 2025-07-12 14:12:57.677120 | orchestrator | 2025-07-12 14:12:54 | INFO  | Flavor SCS-2V-4 created 2025-07-12 14:12:57.677132 | orchestrator | 2025-07-12 14:12:54 | INFO  | Flavor SCS-4V-8 created 2025-07-12 14:12:57.677193 | orchestrator | 2025-07-12 14:12:54 | INFO  | Flavor SCS-8V-16 created 2025-07-12 14:12:57.677222 | orchestrator | 2025-07-12 14:12:54 | INFO  | Flavor SCS-16V-32 created 2025-07-12 14:12:57.677235 | orchestrator | 2025-07-12 14:12:54 | INFO  | Flavor SCS-1V-8 created 2025-07-12 14:12:57.677247 | orchestrator | 2025-07-12 14:12:55 | INFO  | Flavor SCS-2V-16 created 2025-07-12 14:12:57.677258 | orchestrator | 2025-07-12 14:12:55 | INFO  | Flavor SCS-4V-32 created 2025-07-12 14:12:57.677283 | orchestrator | 2025-07-12 14:12:55 | INFO  | Flavor SCS-1L-1 created 2025-07-12 14:12:57.677308 | orchestrator | 2025-07-12 14:12:55 | INFO  | Flavor SCS-2V-4-20s created 2025-07-12 14:12:57.677328 | orchestrator | 2025-07-12 14:12:55 | INFO  | Flavor SCS-4V-16-100s created 2025-07-12 14:12:57.677348 | orchestrator | 2025-07-12 14:12:55 | INFO  | Flavor SCS-1V-4-10 created 2025-07-12 14:12:57.677368 | orchestrator | 2025-07-12 14:12:55 | INFO  | Flavor SCS-2V-8-20 created 2025-07-12 14:12:57.677384 | orchestrator | 2025-07-12 14:12:56 | INFO  | Flavor SCS-4V-16-50 created 2025-07-12 14:12:57.677395 | orchestrator | 2025-07-12 14:12:56 | INFO  | Flavor SCS-8V-32-100 created 2025-07-12 14:12:57.677407 | orchestrator | 2025-07-12 14:12:56 | INFO  | Flavor SCS-1V-2-5 created 
2025-07-12 14:12:57.677418 | orchestrator | 2025-07-12 14:12:56 | INFO  | Flavor SCS-2V-4-10 created 2025-07-12 14:12:57.677430 | orchestrator | 2025-07-12 14:12:56 | INFO  | Flavor SCS-4V-8-20 created 2025-07-12 14:12:57.677442 | orchestrator | 2025-07-12 14:12:56 | INFO  | Flavor SCS-8V-16-50 created 2025-07-12 14:12:57.677454 | orchestrator | 2025-07-12 14:12:56 | INFO  | Flavor SCS-16V-32-100 created 2025-07-12 14:12:57.677465 | orchestrator | 2025-07-12 14:12:57 | INFO  | Flavor SCS-1V-8-20 created 2025-07-12 14:12:57.677477 | orchestrator | 2025-07-12 14:12:57 | INFO  | Flavor SCS-2V-16-50 created 2025-07-12 14:12:57.677488 | orchestrator | 2025-07-12 14:12:57 | INFO  | Flavor SCS-4V-32-100 created 2025-07-12 14:12:57.677500 | orchestrator | 2025-07-12 14:12:57 | INFO  | Flavor SCS-1L-1-5 created 2025-07-12 14:12:59.789807 | orchestrator | 2025-07-12 14:12:59 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-07-12 14:13:09.862927 | orchestrator | 2025-07-12 14:13:09 | INFO  | Task ff7b58be-2997-4f3d-b3c4-f927ff4d6f07 (bootstrap-basic) was prepared for execution. 2025-07-12 14:13:09.863127 | orchestrator | 2025-07-12 14:13:09 | INFO  | It takes a moment until task ff7b58be-2997-4f3d-b3c4-f927ff4d6f07 (bootstrap-basic) has been started and output is visible here. 
2025-07-12 14:14:13.217338 | orchestrator | 2025-07-12 14:14:13.217493 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-07-12 14:14:13.217525 | orchestrator | 2025-07-12 14:14:13.217546 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 14:14:13.217565 | orchestrator | Saturday 12 July 2025 14:13:13 +0000 (0:00:00.077) 0:00:00.077 ********* 2025-07-12 14:14:13.217585 | orchestrator | ok: [localhost] 2025-07-12 14:14:13.217606 | orchestrator | 2025-07-12 14:14:13.217628 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-07-12 14:14:13.217652 | orchestrator | Saturday 12 July 2025 14:13:15 +0000 (0:00:01.859) 0:00:01.936 ********* 2025-07-12 14:14:13.217672 | orchestrator | ok: [localhost] 2025-07-12 14:14:13.217691 | orchestrator | 2025-07-12 14:14:13.217711 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-07-12 14:14:13.217729 | orchestrator | Saturday 12 July 2025 14:13:23 +0000 (0:00:07.886) 0:00:09.822 ********* 2025-07-12 14:14:13.217750 | orchestrator | changed: [localhost] 2025-07-12 14:14:13.217771 | orchestrator | 2025-07-12 14:14:13.217791 | orchestrator | TASK [Get volume type local] *************************************************** 2025-07-12 14:14:13.217826 | orchestrator | Saturday 12 July 2025 14:13:30 +0000 (0:00:07.221) 0:00:17.044 ********* 2025-07-12 14:14:13.217847 | orchestrator | ok: [localhost] 2025-07-12 14:14:13.217867 | orchestrator | 2025-07-12 14:14:13.217888 | orchestrator | TASK [Create volume type local] ************************************************ 2025-07-12 14:14:13.217908 | orchestrator | Saturday 12 July 2025 14:13:38 +0000 (0:00:07.462) 0:00:24.507 ********* 2025-07-12 14:14:13.217929 | orchestrator | changed: [localhost] 2025-07-12 14:14:13.217954 | orchestrator | 2025-07-12 14:14:13.217975 | orchestrator | 
TASK [Create public network] *************************************************** 2025-07-12 14:14:13.217995 | orchestrator | Saturday 12 July 2025 14:13:45 +0000 (0:00:07.325) 0:00:31.832 ********* 2025-07-12 14:14:13.218087 | orchestrator | changed: [localhost] 2025-07-12 14:14:13.218103 | orchestrator | 2025-07-12 14:14:13.218116 | orchestrator | TASK [Set public network to default] ******************************************* 2025-07-12 14:14:13.218160 | orchestrator | Saturday 12 July 2025 14:13:52 +0000 (0:00:07.061) 0:00:38.894 ********* 2025-07-12 14:14:13.218177 | orchestrator | changed: [localhost] 2025-07-12 14:14:13.218207 | orchestrator | 2025-07-12 14:14:13.218242 | orchestrator | TASK [Create public subnet] **************************************************** 2025-07-12 14:14:13.218262 | orchestrator | Saturday 12 July 2025 14:14:00 +0000 (0:00:07.596) 0:00:46.490 ********* 2025-07-12 14:14:13.218281 | orchestrator | changed: [localhost] 2025-07-12 14:14:13.218301 | orchestrator | 2025-07-12 14:14:13.218319 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-07-12 14:14:13.218337 | orchestrator | Saturday 12 July 2025 14:14:05 +0000 (0:00:04.747) 0:00:51.238 ********* 2025-07-12 14:14:13.218355 | orchestrator | changed: [localhost] 2025-07-12 14:14:13.218372 | orchestrator | 2025-07-12 14:14:13.218391 | orchestrator | TASK [Create manager role] ***************************************************** 2025-07-12 14:14:13.218411 | orchestrator | Saturday 12 July 2025 14:14:09 +0000 (0:00:04.380) 0:00:55.619 ********* 2025-07-12 14:14:13.218431 | orchestrator | ok: [localhost] 2025-07-12 14:14:13.218451 | orchestrator | 2025-07-12 14:14:13.218466 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:14:13.218479 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 14:14:13.218492 | orchestrator 
| 2025-07-12 14:14:13.218503 | orchestrator | 2025-07-12 14:14:13.218515 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:14:13.218526 | orchestrator | Saturday 12 July 2025 14:14:12 +0000 (0:00:03.565) 0:00:59.184 ********* 2025-07-12 14:14:13.218564 | orchestrator | =============================================================================== 2025-07-12 14:14:13.218576 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.89s 2025-07-12 14:14:13.218588 | orchestrator | Set public network to default ------------------------------------------- 7.60s 2025-07-12 14:14:13.218599 | orchestrator | Get volume type local --------------------------------------------------- 7.46s 2025-07-12 14:14:13.218611 | orchestrator | Create volume type local ------------------------------------------------ 7.33s 2025-07-12 14:14:13.218622 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.22s 2025-07-12 14:14:13.218634 | orchestrator | Create public network --------------------------------------------------- 7.06s 2025-07-12 14:14:13.218645 | orchestrator | Create public subnet ---------------------------------------------------- 4.75s 2025-07-12 14:14:13.218656 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.38s 2025-07-12 14:14:13.218667 | orchestrator | Create manager role ----------------------------------------------------- 3.57s 2025-07-12 14:14:13.218679 | orchestrator | Gathering Facts --------------------------------------------------------- 1.86s 2025-07-12 14:14:15.477773 | orchestrator | 2025-07-12 14:14:15 | INFO  | It takes a moment until task 31aee0ed-b96a-4f69-b122-d1518db07ff9 (image-manager) has been started and output is visible here. 
2025-07-12 14:14:56.479867 | orchestrator | 2025-07-12 14:14:18 | INFO  | Processing image 'Cirros 0.6.2' 2025-07-12 14:14:56.480010 | orchestrator | 2025-07-12 14:14:19 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-07-12 14:14:56.480033 | orchestrator | 2025-07-12 14:14:19 | INFO  | Importing image Cirros 0.6.2 2025-07-12 14:14:56.480046 | orchestrator | 2025-07-12 14:14:19 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-12 14:14:56.480060 | orchestrator | 2025-07-12 14:14:20 | INFO  | Waiting for image to leave queued state... 2025-07-12 14:14:56.480073 | orchestrator | 2025-07-12 14:14:22 | INFO  | Waiting for import to complete... 2025-07-12 14:14:56.480085 | orchestrator | 2025-07-12 14:14:33 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-07-12 14:14:56.480097 | orchestrator | 2025-07-12 14:14:33 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-07-12 14:14:56.480109 | orchestrator | 2025-07-12 14:14:33 | INFO  | Setting internal_version = 0.6.2 2025-07-12 14:14:56.480121 | orchestrator | 2025-07-12 14:14:33 | INFO  | Setting image_original_user = cirros 2025-07-12 14:14:56.480133 | orchestrator | 2025-07-12 14:14:33 | INFO  | Adding tag os:cirros 2025-07-12 14:14:56.480145 | orchestrator | 2025-07-12 14:14:33 | INFO  | Setting property architecture: x86_64 2025-07-12 14:14:56.480157 | orchestrator | 2025-07-12 14:14:33 | INFO  | Setting property hw_disk_bus: scsi 2025-07-12 14:14:56.480169 | orchestrator | 2025-07-12 14:14:34 | INFO  | Setting property hw_rng_model: virtio 2025-07-12 14:14:56.480180 | orchestrator | 2025-07-12 14:14:34 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-12 14:14:56.480192 | orchestrator | 2025-07-12 14:14:34 | INFO  | Setting property hw_watchdog_action: reset 2025-07-12 14:14:56.480204 | orchestrator | 2025-07-12 14:14:34 | 
INFO  | Setting property hypervisor_type: qemu 2025-07-12 14:14:56.480268 | orchestrator | 2025-07-12 14:14:34 | INFO  | Setting property os_distro: cirros 2025-07-12 14:14:56.480281 | orchestrator | 2025-07-12 14:14:35 | INFO  | Setting property replace_frequency: never 2025-07-12 14:14:56.480292 | orchestrator | 2025-07-12 14:14:35 | INFO  | Setting property uuid_validity: none 2025-07-12 14:14:56.480304 | orchestrator | 2025-07-12 14:14:35 | INFO  | Setting property provided_until: none 2025-07-12 14:14:56.480341 | orchestrator | 2025-07-12 14:14:35 | INFO  | Setting property image_description: Cirros 2025-07-12 14:14:56.480362 | orchestrator | 2025-07-12 14:14:35 | INFO  | Setting property image_name: Cirros 2025-07-12 14:14:56.480376 | orchestrator | 2025-07-12 14:14:36 | INFO  | Setting property internal_version: 0.6.2 2025-07-12 14:14:56.480394 | orchestrator | 2025-07-12 14:14:36 | INFO  | Setting property image_original_user: cirros 2025-07-12 14:14:56.480407 | orchestrator | 2025-07-12 14:14:36 | INFO  | Setting property os_version: 0.6.2 2025-07-12 14:14:56.480420 | orchestrator | 2025-07-12 14:14:36 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-12 14:14:56.480434 | orchestrator | 2025-07-12 14:14:37 | INFO  | Setting property image_build_date: 2023-05-30 2025-07-12 14:14:56.480446 | orchestrator | 2025-07-12 14:14:37 | INFO  | Checking status of 'Cirros 0.6.2' 2025-07-12 14:14:56.480458 | orchestrator | 2025-07-12 14:14:37 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-07-12 14:14:56.480470 | orchestrator | 2025-07-12 14:14:37 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-07-12 14:14:56.480483 | orchestrator | 2025-07-12 14:14:37 | INFO  | Processing image 'Cirros 0.6.3' 2025-07-12 14:14:56.480495 | orchestrator | 2025-07-12 14:14:37 | INFO  | Tested URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-07-12 14:14:56.480508 | orchestrator | 2025-07-12 14:14:37 | INFO  | Importing image Cirros 0.6.3 2025-07-12 14:14:56.480520 | orchestrator | 2025-07-12 14:14:37 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-12 14:14:56.480533 | orchestrator | 2025-07-12 14:14:38 | INFO  | Waiting for image to leave queued state... 2025-07-12 14:14:56.480545 | orchestrator | 2025-07-12 14:14:40 | INFO  | Waiting for import to complete... 2025-07-12 14:14:56.480558 | orchestrator | 2025-07-12 14:14:51 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-07-12 14:14:56.480589 | orchestrator | 2025-07-12 14:14:51 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-07-12 14:14:56.480603 | orchestrator | 2025-07-12 14:14:51 | INFO  | Setting internal_version = 0.6.3 2025-07-12 14:14:56.480615 | orchestrator | 2025-07-12 14:14:51 | INFO  | Setting image_original_user = cirros 2025-07-12 14:14:56.480628 | orchestrator | 2025-07-12 14:14:51 | INFO  | Adding tag os:cirros 2025-07-12 14:14:56.480641 | orchestrator | 2025-07-12 14:14:51 | INFO  | Setting property architecture: x86_64 2025-07-12 14:14:56.480653 | orchestrator | 2025-07-12 14:14:51 | INFO  | Setting property hw_disk_bus: scsi 2025-07-12 14:14:56.480665 | orchestrator | 2025-07-12 14:14:51 | INFO  | Setting property hw_rng_model: virtio 2025-07-12 14:14:56.480678 | orchestrator | 2025-07-12 14:14:52 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-12 14:14:56.480691 | orchestrator | 2025-07-12 14:14:52 | INFO  | Setting property hw_watchdog_action: reset 2025-07-12 14:14:56.480702 | orchestrator | 2025-07-12 14:14:52 | INFO  | Setting property hypervisor_type: qemu 2025-07-12 14:14:56.480714 | orchestrator | 2025-07-12 14:14:52 | INFO  | Setting property os_distro: cirros 2025-07-12 14:14:56.480725 | 
orchestrator | 2025-07-12 14:14:52 | INFO  | Setting property replace_frequency: never 2025-07-12 14:14:56.480737 | orchestrator | 2025-07-12 14:14:53 | INFO  | Setting property uuid_validity: none 2025-07-12 14:14:56.480757 | orchestrator | 2025-07-12 14:14:53 | INFO  | Setting property provided_until: none 2025-07-12 14:14:56.480769 | orchestrator | 2025-07-12 14:14:53 | INFO  | Setting property image_description: Cirros 2025-07-12 14:14:56.480781 | orchestrator | 2025-07-12 14:14:54 | INFO  | Setting property image_name: Cirros 2025-07-12 14:14:56.480792 | orchestrator | 2025-07-12 14:14:54 | INFO  | Setting property internal_version: 0.6.3 2025-07-12 14:14:56.480804 | orchestrator | 2025-07-12 14:14:54 | INFO  | Setting property image_original_user: cirros 2025-07-12 14:14:56.480815 | orchestrator | 2025-07-12 14:14:55 | INFO  | Setting property os_version: 0.6.3 2025-07-12 14:14:56.480827 | orchestrator | 2025-07-12 14:14:55 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-12 14:14:56.480839 | orchestrator | 2025-07-12 14:14:55 | INFO  | Setting property image_build_date: 2024-09-26 2025-07-12 14:14:56.480850 | orchestrator | 2025-07-12 14:14:55 | INFO  | Checking status of 'Cirros 0.6.3' 2025-07-12 14:14:56.480862 | orchestrator | 2025-07-12 14:14:55 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-07-12 14:14:56.480878 | orchestrator | 2025-07-12 14:14:55 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-07-12 14:14:56.882986 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-07-12 14:14:59.050857 | orchestrator | 2025-07-12 14:14:59 | INFO  | date: 2025-07-12 2025-07-12 14:14:59.050958 | orchestrator | 2025-07-12 14:14:59 | INFO  | image: octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 14:14:59.050975 | orchestrator | 2025-07-12 14:14:59 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 14:14:59.051012 | orchestrator | 2025-07-12 14:14:59 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2.CHECKSUM 2025-07-12 14:14:59.177279 | orchestrator | 2025-07-12 14:14:59 | INFO  | checksum: c95855ae58dddb977df0d8e11b851fc66dd0abac9e608812e6020c0a95df8f26 2025-07-12 14:14:59.278606 | orchestrator | 2025-07-12 14:14:59 | INFO  | It takes a moment until task 03995155-1f0c-4313-9106-c067fc0ce858 (image-manager) has been started and output is visible here. 2025-07-12 14:16:00.101980 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-07-12 14:16:00.102215 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-07-12 14:16:00.102246 | orchestrator | 2025-07-12 14:15:01 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 14:16:00.102274 | orchestrator | 2025-07-12 14:15:01 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2: 200 2025-07-12 14:16:00.102298 | orchestrator | 2025-07-12 14:15:01 | INFO  | Importing image OpenStack Octavia Amphora 2025-07-12 2025-07-12 14:16:00.102321 | orchestrator | 2025-07-12 14:15:01 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 14:16:00.102374 | orchestrator | 2025-07-12 14:15:02 | INFO  | Waiting for image to leave queued state... 2025-07-12 14:16:00.102430 | orchestrator | 2025-07-12 14:15:04 | INFO  | Waiting for import to complete... 2025-07-12 14:16:00.102452 | orchestrator | 2025-07-12 14:15:14 | INFO  | Waiting for import to complete... 2025-07-12 14:16:00.102473 | orchestrator | 2025-07-12 14:15:24 | INFO  | Waiting for import to complete... 2025-07-12 14:16:00.102495 | orchestrator | 2025-07-12 14:15:35 | INFO  | Waiting for import to complete... 2025-07-12 14:16:00.102518 | orchestrator | 2025-07-12 14:15:45 | INFO  | Waiting for import to complete... 
2025-07-12 14:16:00.102541 | orchestrator | 2025-07-12 14:15:55 | INFO  | Import of 'OpenStack Octavia Amphora 2025-07-12' successfully completed, reloading images 2025-07-12 14:16:00.102564 | orchestrator | 2025-07-12 14:15:55 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 14:16:00.102584 | orchestrator | 2025-07-12 14:15:55 | INFO  | Setting internal_version = 2025-07-12 2025-07-12 14:16:00.102604 | orchestrator | 2025-07-12 14:15:55 | INFO  | Setting image_original_user = ubuntu 2025-07-12 14:16:00.102625 | orchestrator | 2025-07-12 14:15:55 | INFO  | Adding tag amphora 2025-07-12 14:16:00.102645 | orchestrator | 2025-07-12 14:15:56 | INFO  | Adding tag os:ubuntu 2025-07-12 14:16:00.102666 | orchestrator | 2025-07-12 14:15:56 | INFO  | Setting property architecture: x86_64 2025-07-12 14:16:00.102685 | orchestrator | 2025-07-12 14:15:56 | INFO  | Setting property hw_disk_bus: scsi 2025-07-12 14:16:00.102706 | orchestrator | 2025-07-12 14:15:56 | INFO  | Setting property hw_rng_model: virtio 2025-07-12 14:16:00.102740 | orchestrator | 2025-07-12 14:15:56 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-12 14:16:00.102758 | orchestrator | 2025-07-12 14:15:57 | INFO  | Setting property hw_watchdog_action: reset 2025-07-12 14:16:00.102775 | orchestrator | 2025-07-12 14:15:57 | INFO  | Setting property hypervisor_type: qemu 2025-07-12 14:16:00.102793 | orchestrator | 2025-07-12 14:15:57 | INFO  | Setting property os_distro: ubuntu 2025-07-12 14:16:00.102811 | orchestrator | 2025-07-12 14:15:57 | INFO  | Setting property replace_frequency: quarterly 2025-07-12 14:16:00.102829 | orchestrator | 2025-07-12 14:15:57 | INFO  | Setting property uuid_validity: last-1 2025-07-12 14:16:00.102847 | orchestrator | 2025-07-12 14:15:58 | INFO  | Setting property provided_until: none 2025-07-12 14:16:00.102865 | orchestrator | 2025-07-12 14:15:58 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-07-12 
14:16:00.102883 | orchestrator | 2025-07-12 14:15:58 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-07-12 14:16:00.102900 | orchestrator | 2025-07-12 14:15:58 | INFO  | Setting property internal_version: 2025-07-12 2025-07-12 14:16:00.102918 | orchestrator | 2025-07-12 14:15:59 | INFO  | Setting property image_original_user: ubuntu 2025-07-12 14:16:00.102935 | orchestrator | 2025-07-12 14:15:59 | INFO  | Setting property os_version: 2025-07-12 2025-07-12 14:16:00.102954 | orchestrator | 2025-07-12 14:15:59 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 14:16:00.103000 | orchestrator | 2025-07-12 14:15:59 | INFO  | Setting property image_build_date: 2025-07-12 2025-07-12 14:16:00.103019 | orchestrator | 2025-07-12 14:15:59 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 14:16:00.103036 | orchestrator | 2025-07-12 14:15:59 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 14:16:00.103069 | orchestrator | 2025-07-12 14:15:59 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-07-12 14:16:00.103089 | orchestrator | 2025-07-12 14:15:59 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-07-12 14:16:00.103109 | orchestrator | 2025-07-12 14:15:59 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-07-12 14:16:00.103129 | orchestrator | 2025-07-12 14:15:59 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-07-12 14:16:00.864895 | orchestrator | ok: Runtime: 0:03:15.361265 2025-07-12 14:16:00.926190 | 2025-07-12 14:16:00.926311 | TASK [Run checks] 2025-07-12 14:16:01.660105 | orchestrator | + set -e 2025-07-12 14:16:01.660387 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 14:16:01.660417 | 
orchestrator | ++ export INTERACTIVE=false
2025-07-12 14:16:01.660438 | orchestrator | ++ INTERACTIVE=false
2025-07-12 14:16:01.660453 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 14:16:01.660465 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 14:16:01.660480 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-07-12 14:16:01.661279 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-07-12 14:16:01.669686 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-12 14:16:01.669756 | orchestrator |
2025-07-12 14:16:01.669772 | orchestrator | # CHECK
2025-07-12 14:16:01.669784 | orchestrator |
2025-07-12 14:16:01.669795 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-12 14:16:01.669822 | orchestrator | + echo
2025-07-12 14:16:01.669845 | orchestrator | + echo '# CHECK'
2025-07-12 14:16:01.669856 | orchestrator | + echo
2025-07-12 14:16:01.669871 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 14:16:01.670596 | orchestrator | ++ semver latest 5.0.0
2025-07-12 14:16:01.734471 | orchestrator |
2025-07-12 14:16:01.734544 | orchestrator | ## Containers @ testbed-manager
2025-07-12 14:16:01.734557 | orchestrator |
2025-07-12 14:16:01.734571 | orchestrator | + [[ -1 -eq -1 ]]
2025-07-12 14:16:01.734583 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-12 14:16:01.734609 | orchestrator | + echo
2025-07-12 14:16:01.734621 | orchestrator | + echo '## Containers @ testbed-manager'
2025-07-12 14:16:01.734633 | orchestrator | + echo
2025-07-12 14:16:01.734644 | orchestrator | + osism container testbed-manager ps
2025-07-12 14:16:04.013264 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 14:16:04.013429 | orchestrator | 04e73183ac9c registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter
2025-07-12 14:16:04.013458 | orchestrator | 9c98dee9e9a7 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager
2025-07-12 14:16:04.013470 | orchestrator | 44d3565565f4 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2025-07-12 14:16:04.013489 | orchestrator | 23aab3c43b39 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-07-12 14:16:04.013501 | orchestrator | da09975a1bac registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server
2025-07-12 14:16:04.013518 | orchestrator | 1601a2eeeec0 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 18 minutes ago Up 17 minutes cephclient
2025-07-12 14:16:04.013530 | orchestrator | b7a74d8970cc registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-07-12 14:16:04.013542 | orchestrator | ba0ed1b8d8d1 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-07-12 14:16:04.013554 | orchestrator | 3f08b84393b0 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin
2025-07-12 14:16:04.013595 | orchestrator | 4923d47df108 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-07-12 14:16:04.013607 | orchestrator | 5c86affe4650 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient
2025-07-12 14:16:04.013618 | orchestrator | 6b460c1cf793 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 33 minutes ago Up 32 minutes (healthy) 8080/tcp homer
2025-07-12 14:16:04.013629 | orchestrator | e98c7a06e07c registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-07-12 14:16:04.013640 | orchestrator | c07a51c02c47 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 40 minutes (healthy) manager-inventory_reconciler-1
2025-07-12 14:16:04.013651 | orchestrator | 733c3be04271 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) osism-ansible
2025-07-12 14:16:04.013684 | orchestrator | b6d925a6518b registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) ceph-ansible
2025-07-12 14:16:04.013702 | orchestrator | 9d3822359a27 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) kolla-ansible
2025-07-12 14:16:04.013714 | orchestrator | 81b1ea96be4a registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) osism-kubernetes
2025-07-12 14:16:04.013725 | orchestrator | 81dc26da8e41 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" About an hour ago Up 41 minutes (healthy) 8000/tcp manager-ara-server-1
2025-07-12 14:16:04.013736 | orchestrator | d65df465bf06 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-openstack-1
2025-07-12 14:16:04.013747 | orchestrator | d42474335afc registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-beat-1
2025-07-12 14:16:04.013758 | orchestrator | dfdc151ce647 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 6379/tcp manager-redis-1
2025-07-12 14:16:04.013770 | orchestrator | 3434fed5e7bd registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" About an hour ago Up 41 minutes (healthy) 3306/tcp manager-mariadb-1
2025-07-12 14:16:04.013789 | orchestrator | 9ef2aa463e03 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-flower-1
2025-07-12 14:16:04.013800 | orchestrator | 696a694f22bc registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) manager-listener-1
2025-07-12 14:16:04.013811 | orchestrator | d2925e12b9d3 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 41 minutes (healthy) osismclient
2025-07-12 14:16:04.013822 | orchestrator | d23fb69bf921 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 41 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-07-12 14:16:04.013833 | orchestrator | 56ff1e6ef1aa registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-07-12 14:16:04.315405 | orchestrator |
2025-07-12 14:16:04.315511 | orchestrator | ## Images @ testbed-manager
2025-07-12 14:16:04.315526 | orchestrator |
2025-07-12 14:16:04.315539 | orchestrator | + echo
2025-07-12 14:16:04.315551 | orchestrator | + echo '## Images @ testbed-manager'
2025-07-12 14:16:04.315562 | orchestrator | + echo
2025-07-12 14:16:04.315574 | orchestrator | + osism container testbed-manager images
2025-07-12 14:16:06.473918 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 14:16:06.474082 | orchestrator | registry.osism.tech/osism/osism-ansible latest 1ab605c61d0a 2 hours ago 575MB
2025-07-12 14:16:06.474104 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d2fcb41febbc 11 hours ago 11.5MB
2025-07-12 14:16:06.474116 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 751f5a3be689 11 hours ago 234MB
2025-07-12 14:16:06.474127 | orchestrator | registry.osism.tech/osism/cephclient reef 6e86f0318c12 11 hours ago 453MB
2025-07-12 14:16:06.474164 | orchestrator | registry.osism.tech/kolla/cron 2024.2 4ce8240a893c 13 hours ago 318MB
2025-07-12 14:16:06.474176 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f6a8ddc0fa19 13 hours ago 746MB
2025-07-12 14:16:06.474187 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cb87a0b5a431 13 hours ago 628MB
2025-07-12 14:16:06.474198 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 7007172fb408 13 hours ago 410MB
2025-07-12 14:16:06.474209 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7743da2fe9b2 13 hours ago 358MB
2025-07-12 14:16:06.474220 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 e582fc7c3e8e 13 hours ago 891MB
2025-07-12 14:16:06.474231 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 6a16161bc0ba 13 hours ago 456MB
2025-07-12 14:16:06.474241 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 8f186821a09b 13 hours ago 360MB
2025-07-12 14:16:06.474252 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 30b94beeef83 14 hours ago 535MB
2025-07-12 14:16:06.474263 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest db2d89ab0928 14 hours ago 1.21GB
2025-07-12 14:16:06.474274 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 8829472f7c53 14 hours ago 571MB
2025-07-12 14:16:06.474284 | orchestrator | registry.osism.tech/osism/osism latest c4671b5d05ab 14 hours ago 311MB
2025-07-12 14:16:06.474319 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest c8091be898ad 14 hours ago 308MB
2025-07-12 14:16:06.474331 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 weeks ago 226MB
2025-07-12 14:16:06.474342 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 7fb85a4198e9 4 weeks ago 329MB
2025-07-12 14:16:06.474376 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 6 weeks ago 41.4MB
2025-07-12 14:16:06.474387 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 5 months ago 571MB
2025-07-12 14:16:06.474398 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 10 months ago 300MB
2025-07-12 14:16:06.474409 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 13 months ago 146MB
2025-07-12 14:16:06.779755 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 14:16:06.780114 | orchestrator | ++ semver latest 5.0.0
2025-07-12 14:16:06.844493 | orchestrator |
2025-07-12 14:16:06.844589 | orchestrator | ## Containers @ testbed-node-0
2025-07-12 14:16:06.844607 | orchestrator |
2025-07-12 14:16:06.844619 | orchestrator | + [[ -1 -eq -1 ]]
2025-07-12 14:16:06.844631 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-12 14:16:06.844641 | orchestrator | + echo
2025-07-12 14:16:06.844653 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-07-12 14:16:06.844664 | orchestrator | + echo
2025-07-12 14:16:06.844675 | orchestrator | + osism container testbed-node-0 ps
2025-07-12 14:16:09.081372 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 14:16:09.081483 | orchestrator | 3f0bd30057e2 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-07-12 14:16:09.081499 | orchestrator | 807c253f9f75 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-07-12 14:16:09.081511 | orchestrator | c4765676322f registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_api
2025-07-12 14:16:09.081522 | orchestrator | 124c8e306c50 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-12 14:16:09.081533 | orchestrator | 6a2cea91d0d6 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-07-12 14:16:09.081544 | orchestrator | 068c003b1541 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-07-12 14:16:09.081555 | orchestrator | 56004aa80fae registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-07-12 14:16:09.081588 | orchestrator | 84dcbb29855e registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api
2025-07-12 14:16:09.081600 | orchestrator | ee96cede15d9 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-07-12 14:16:09.081611 | orchestrator | 1bca9d8c0372 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2025-07-12 14:16:09.081622 | orchestrator | 277e02f631d2 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-07-12 14:16:09.081660 | orchestrator | ffe0007a6b8d registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-07-12 14:16:09.081672 | orchestrator | 1b6ec1aafa91 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-07-12 14:16:09.081683 | orchestrator | 42873d3c526e registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor
2025-07-12 14:16:09.081694 | orchestrator | 552405a8c7c2 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api
2025-07-12 14:16:09.081705 | orchestrator | fd6607d6f59f registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-07-12 14:16:09.081715 | orchestrator | 54dc038cd664 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-07-12 14:16:09.081726 | orchestrator | f325dfc33a7d registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-07-12 14:16:09.081737 | orchestrator | b9f626bb62fd registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-07-12 14:16:09.081748 | orchestrator | 51873d7b1e9b registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-07-12 14:16:09.081759 | orchestrator | 7bd66f3d5e3a registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-07-12 14:16:09.081790 | orchestrator | bef60ed46a32 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2025-07-12 14:16:09.081802 | orchestrator | 81498e240278 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api
2025-07-12 14:16:09.081813 | orchestrator | 912cce156f53 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2025-07-12 14:16:09.081823 | orchestrator | 7d533509d681 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-07-12 14:16:09.081834 | orchestrator | ae0ddfa7540a registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-07-12 14:16:09.081850 | orchestrator | 5cfd258f062d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2025-07-12 14:16:09.081861 | orchestrator | 3506d4356816 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) keystone
2025-07-12 14:16:09.081877 | orchestrator | c086433b2918 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-07-12 14:16:09.081888 | orchestrator | f827554e3dd3 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-07-12 14:16:09.081899 | orchestrator | 43d76ffa9f8d registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-07-12 14:16:09.081918 | orchestrator | 217a27428d48 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-07-12 14:16:09.081929 | orchestrator | b01bbdcdd684 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-07-12 14:16:09.081939 | orchestrator | 08a50db281e0 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-07-12 14:16:09.081950 | orchestrator | f07b01d31cd5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0
2025-07-12 14:16:09.081961 | orchestrator | ebb842cf8980 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-07-12 14:16:09.081971 | orchestrator | 5200f302c262 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-07-12 14:16:09.081982 | orchestrator | 0270af0e5ad1 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-07-12 14:16:09.081993 | orchestrator | c1fcc95d8c2b registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2025-07-12 14:16:09.082004 | orchestrator | 2ad5c796fa11 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-07-12 14:16:09.082056 | orchestrator | 766b9ebc5e9f registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2025-07-12 14:16:09.082070 | orchestrator | 34d417765272 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0
2025-07-12 14:16:09.082081 | orchestrator | a60231b54f03 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-07-12 14:16:09.082092 | orchestrator | 07205b941df4 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq
2025-07-12 14:16:09.082111 | orchestrator | d62cb312e3a0 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-07-12 14:16:09.082122 | orchestrator | 7d5f76876812 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-07-12 14:16:09.082133 | orchestrator | 5f37bdab7749 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2025-07-12 14:16:09.082144 | orchestrator | 5af492082e8e registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2025-07-12 14:16:09.082155 | orchestrator | 79535e4fc6ab registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-07-12 14:16:09.082165 | orchestrator | 16a49dfbc56f registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-07-12 14:16:09.082176 | orchestrator | 2c14a772c991 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2025-07-12 14:16:09.082199 | orchestrator | a5c6ce2895ba registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-07-12 14:16:09.383574 | orchestrator |
2025-07-12 14:16:09.383667 | orchestrator | ## Images @ testbed-node-0
2025-07-12 14:16:09.383687 | orchestrator |
2025-07-12 14:16:09.383699 | orchestrator | + echo
2025-07-12 14:16:09.383711 | orchestrator | + echo '## Images @ testbed-node-0'
2025-07-12 14:16:09.383723 | orchestrator | + echo
2025-07-12 14:16:09.383735 | orchestrator | + osism container testbed-node-0 images
2025-07-12 14:16:11.597408 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 14:16:11.597542 | orchestrator | registry.osism.tech/osism/ceph-daemon reef fe9c699108e1 11 hours ago 1.27GB
2025-07-12 14:16:11.597618 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 da9bab98f1c4 13 hours ago 1.01GB
2025-07-12 14:16:11.597633 | orchestrator | registry.osism.tech/kolla/cron 2024.2 4ce8240a893c 13 hours ago 318MB
2025-07-12 14:16:11.597645 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f19504b04274 13 hours ago 318MB
2025-07-12 14:16:11.597656 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 ea215f3799eb 13 hours ago 375MB
2025-07-12 14:16:11.597667 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f6a8ddc0fa19 13 hours ago 746MB
2025-07-12 14:16:11.597678 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 db9179df457c 13 hours ago 417MB
2025-07-12 14:16:11.597689 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cb87a0b5a431 13 hours ago 628MB
2025-07-12 14:16:11.597700 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2ee2aea4ecbb 13 hours ago 329MB
2025-07-12 14:16:11.597711 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 ec7afc7181a3 13 hours ago 326MB
2025-07-12 14:16:11.597722 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 9a6d9feb60b1 13 hours ago 1.55GB
2025-07-12 14:16:11.597732 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b14bb9ff6f80 13 hours ago 1.59GB
2025-07-12 14:16:11.597743 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 7007172fb408 13 hours ago 410MB
2025-07-12 14:16:11.597754 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 aad3a3158749 13 hours ago 353MB
2025-07-12 14:16:11.597764 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7743da2fe9b2 13 hours ago 358MB
2025-07-12 14:16:11.597775 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e89c3afadc38 13 hours ago 344MB
2025-07-12 14:16:11.597787 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 2cebeabcbd0e 13 hours ago 351MB
2025-07-12 14:16:11.597798 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 adada41a764e 13 hours ago 1.21GB
2025-07-12 14:16:11.597809 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 15e39d968d77 13 hours ago 361MB
2025-07-12 14:16:11.597819 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 abe28dfb5ccc 13 hours ago 361MB
2025-07-12 14:16:11.597830 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 e8b0ed492d0f 13 hours ago 324MB
2025-07-12 14:16:11.597865 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29b0dc955a2b 13 hours ago 590MB
2025-07-12 14:16:11.597877 | orchestrator | registry.osism.tech/kolla/redis 2024.2 82d7de98b313 13 hours ago 324MB
2025-07-12 14:16:11.597887 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6ad384c8beaf 13 hours ago 1.04GB
2025-07-12 14:16:11.597922 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 95944a9fdd62 13 hours ago 1.05GB
2025-07-12 14:16:11.597934 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 52bc7fc0663b 13 hours ago 1.06GB
2025-07-12 14:16:11.597944 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0e5d94078a38 13 hours ago 1.05GB
2025-07-12 14:16:11.597955 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d803f5dcba2b 13 hours ago 1.06GB
2025-07-12 14:16:11.597966 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 7bd70fa2eaca 13 hours ago 1.05GB
2025-07-12 14:16:11.597976 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 47afb51ae8f8 13 hours ago 1.05GB
2025-07-12 14:16:11.597987 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 f8a3d90ad64b 13 hours ago 1.1GB
2025-07-12 14:16:11.598001 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 1a864f84d2f1 13 hours ago 1.1GB
2025-07-12 14:16:11.598063 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 7b97136c8365 13 hours ago 1.1GB
2025-07-12 14:16:11.598094 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 6afb7ebf1f84 13 hours ago 1.12GB
2025-07-12 14:16:11.598114 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 270474f08bd9 13 hours ago 1.12GB
2025-07-12 14:16:11.598131 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 ea10afd51d8e 13 hours ago 1.24GB
2025-07-12 14:16:11.598183 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 a80373d5f022 13 hours ago 1.31GB
2025-07-12 14:16:11.598197 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 373788c4de01 13 hours ago 1.2GB
2025-07-12 14:16:11.598208 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 e76aee078f81 13 hours ago 1.11GB
2025-07-12 14:16:11.598218 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 b7f54fc3ae64 13 hours ago 1.13GB
2025-07-12 14:16:11.598229 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 24e61d9295a6 13 hours ago 1.11GB
2025-07-12 14:16:11.598239 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 3e1a1d846e00 13 hours ago 1.29GB
2025-07-12 14:16:11.598250 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 97653d20b217 13 hours ago 1.42GB
2025-07-12 14:16:11.598261 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 06efaffd4461 13 hours ago 1.29GB
2025-07-12 14:16:11.598271 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 344e73ee870a 13 hours ago 1.29GB
2025-07-12 14:16:11.598282 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 896a5e1d5e1a 13 hours ago 1.11GB
2025-07-12 14:16:11.598292 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 915a079aa111 13 hours ago 1.11GB
2025-07-12 14:16:11.598303 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 d9c02e5ae275 13 hours ago 1.04GB
2025-07-12 14:16:11.598315 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 1d72d0a0f668 13 hours ago 1.04GB
2025-07-12 14:16:11.598325 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 ed5bf0762532 13 hours ago 1.06GB
2025-07-12 14:16:11.598336 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 3528d69772e2 13 hours ago 1.06GB
2025-07-12 14:16:11.598347 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 55b4043ace1e 13 hours ago 1.06GB
2025-07-12 14:16:11.598384 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 a7f0a5d9b28c 13 hours ago 1.15GB
2025-07-12 14:16:11.598396 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 820d96fc6871 13 hours ago 1.41GB
2025-07-12 14:16:11.598418 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 9e62aa5265cd 13 hours ago 1.41GB
2025-07-12 14:16:11.598430 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 a98dd1df23f2 13 hours ago 1.04GB
2025-07-12 14:16:11.598440 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 1a25044bfbed 13 hours ago 1.04GB
2025-07-12 14:16:11.598451 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 59af0b95f004 13 hours ago 1.04GB
2025-07-12 14:16:11.598461 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 6a2f93c44023 13 hours ago 1.04GB
2025-07-12 14:16:11.598472 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 6e5bcb7465c5 13 hours ago 946MB
2025-07-12 14:16:11.598483 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b96ae4c576bd 13 hours ago 947MB
2025-07-12 14:16:11.598493 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c088724e55ba 13 hours ago 946MB
2025-07-12 14:16:11.598504 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 4b91bbc5fcc8 13 hours ago 947MB
2025-07-12 14:16:11.942323 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 14:16:11.942753 | orchestrator | ++ semver latest 5.0.0
2025-07-12 14:16:12.007213 | orchestrator |
2025-07-12 14:16:12.007294 | orchestrator | ## Containers @ testbed-node-1
2025-07-12 14:16:12.007308 | orchestrator |
2025-07-12 14:16:12.007320 | orchestrator | + [[ -1 -eq -1 ]]
2025-07-12 14:16:12.007331 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-12 14:16:12.007342 | orchestrator | + echo
2025-07-12 14:16:12.007380 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-07-12 14:16:12.007402 | orchestrator | + echo
2025-07-12 14:16:12.007419 | orchestrator | + osism container testbed-node-1 ps
2025-07-12 14:16:14.216312 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 14:16:14.216492 | orchestrator | e5ddf2ec3ebe registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-07-12 14:16:14.216511 | orchestrator | 3871bfe6f74a registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-07-12 14:16:14.216523 | orchestrator | 79306cb17edb registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-07-12 14:16:14.216534 | orchestrator | bb172b0ee829 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api
2025-07-12 14:16:14.216545 | orchestrator | 6b7f27b63291 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-12 14:16:14.216555 | orchestrator | 40e4b15f0db0 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-07-12 14:16:14.216566 | orchestrator | 96c8b1fe3bb9 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-07-12 14:16:14.216595 | orchestrator | 35a67ff8fc14 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-07-12 14:16:14.216611 | orchestrator | 6059cae14637 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-07-12 14:16:14.216628 | orchestrator | dfd73c045be1 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2025-07-12 14:16:14.216664 | orchestrator | 309f0f83c588 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2025-07-12 14:16:14.216676 | orchestrator | 6064c076d933 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-07-12 14:16:14.216687 | orchestrator | cc42c996864e registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-07-12 14:16:14.216698 | orchestrator | 07c94f40cc01 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor
2025-07-12 14:16:14.216708 | orchestrator | 04d515cb6c13 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api
2025-07-12 14:16:14.216719 | orchestrator | 0e3d054903fd registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-07-12 14:16:14.216729 | orchestrator | a82fdeae6b54 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-07-12 14:16:14.216745 | orchestrator | 354af54a3488 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-07-12 14:16:14.216764 | orchestrator | 2c1d942f2b43 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-07-12 14:16:14.216783 | orchestrator | b43c5d6d2c98 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-07-12 14:16:14.216806 | orchestrator | 0a6f6f288049 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-07-12 14:16:14.216851 | orchestrator | ba87b73e2cee registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2025-07-12 14:16:14.216873 | orchestrator | b9d09891525c registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api
2025-07-12 14:16:14.216893 | orchestrator | 061dee04b17d registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2025-07-12 14:16:14.216907 | orchestrator | 4bff1ff8577e registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-07-12 14:16:14.216921 | orchestrator | 97abc024fbce registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-07-12 14:16:14.216933 | orchestrator | 0474e69b6694 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2025-07-12 14:16:14.216945 | orchestrator | ba184fe36c9e registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-07-12 14:16:14.216965 | orchestrator | 70a5ef3d7272 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-07-12 14:16:14.216977 | orchestrator | 86775e65a473 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-07-12 14:16:14.217000 | orchestrator | d6767f5c096d registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-07-12 14:16:14.217013 | orchestrator | 089f6b9d1a5c registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-07-12 14:16:14.217025 | orchestrator | c091b0035376 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2025-07-12 14:16:14.217037 | orchestrator | d4be43791e09 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-07-12 14:16:14.217050 | orchestrator | 148f6f7cb250 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1
2025-07-12 14:16:14.217062 | orchestrator | b353ecc2c5da registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-07-12 14:16:14.217074 | orchestrator | 53b879c40594 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-07-12 14:16:14.217086 | orchestrator | 7ce622139922 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-07-12 14:16:14.217099 | orchestrator | ab7752f234f2 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd
2025-07-12 14:16:14.217110 | orchestrator | 27e3b7c1c262 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db
2025-07-12 14:16:14.217122 | orchestrator | 4278af2927e9 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2025-07-12 14:16:14.217135 | orchestrator | 9f9613441af1 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-07-12 14:16:14.217147 | orchestrator | 98196abab815 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-07-12 14:16:14.217159 | orchestrator | 0c2dee3c442c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2025-07-12 14:16:14.217179 | orchestrator | c612ed89a900 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-07-12 14:16:14.217190 | orchestrator | 59b2da031f94 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-07-12 14:16:14.217201 | orchestrator | 66cd0d9cf47b registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel
2025-07-12 14:16:14.217212 | orchestrator | 64af28f942ea registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2025-07-12 14:16:14.217223 | orchestrator | fd9764869713 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-07-12 14:16:14.217240 | orchestrator | b2b9a9110294 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-07-12 14:16:14.217251 | orchestrator | d35f80f57a0d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-07-12 14:16:14.217262 | orchestrator | 0d5f5d9800ab registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2025-07-12 14:16:14.531119 | orchestrator |
2025-07-12 14:16:14.531230 | orchestrator | ## Images @ testbed-node-1
2025-07-12 14:16:14.531247 | orchestrator |
2025-07-12 14:16:14.531259 | orchestrator | + echo
2025-07-12 14:16:14.531271 | orchestrator | + echo '## Images @ testbed-node-1'
2025-07-12 14:16:14.531283 | orchestrator | + echo
2025-07-12 14:16:14.531315 | orchestrator | + osism container testbed-node-1 images
2025-07-12 14:16:16.769064 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 14:16:16.769176 | orchestrator | registry.osism.tech/osism/ceph-daemon reef fe9c699108e1 11 hours ago 1.27GB
2025-07-12 14:16:16.769191 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 da9bab98f1c4 13 hours ago 1.01GB
2025-07-12 14:16:16.769203 | orchestrator | registry.osism.tech/kolla/cron 2024.2 4ce8240a893c 13 hours ago 318MB
2025-07-12 14:16:16.769235 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f19504b04274 13 hours ago 318MB
2025-07-12 14:16:16.769246 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 ea215f3799eb 13 hours ago 375MB
2025-07-12 14:16:16.769257 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f6a8ddc0fa19 13 hours ago 746MB
2025-07-12 14:16:16.769268 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 db9179df457c 13 hours ago 417MB
2025-07-12 14:16:16.769278 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cb87a0b5a431 13 hours ago 628MB
2025-07-12 14:16:16.769289 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2ee2aea4ecbb 13 hours ago 329MB
2025-07-12 14:16:16.769299 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 ec7afc7181a3 13 hours ago 326MB
2025-07-12 14:16:16.769310 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 9a6d9feb60b1 13 hours ago 1.55GB
2025-07-12 14:16:16.769321 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b14bb9ff6f80 13 hours ago 1.59GB
2025-07-12 14:16:16.769331 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 7007172fb408 13 hours ago 410MB
2025-07-12 14:16:16.769342 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 aad3a3158749 13 hours ago 353MB
2025-07-12 14:16:16.769353 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7743da2fe9b2 13 hours ago 358MB
2025-07-12 14:16:16.769388 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e89c3afadc38 13 hours ago 344MB
2025-07-12 14:16:16.769399 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 2cebeabcbd0e 13 hours ago 351MB
2025-07-12 14:16:16.769410 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 adada41a764e 13 hours ago
1.21GB 2025-07-12 14:16:16.769421 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 15e39d968d77 13 hours ago 361MB 2025-07-12 14:16:16.769432 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 abe28dfb5ccc 13 hours ago 361MB 2025-07-12 14:16:16.769443 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 e8b0ed492d0f 13 hours ago 324MB 2025-07-12 14:16:16.769454 | orchestrator | registry.osism.tech/kolla/redis 2024.2 82d7de98b313 13 hours ago 324MB 2025-07-12 14:16:16.769486 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29b0dc955a2b 13 hours ago 590MB 2025-07-12 14:16:16.769497 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6ad384c8beaf 13 hours ago 1.04GB 2025-07-12 14:16:16.769508 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 95944a9fdd62 13 hours ago 1.05GB 2025-07-12 14:16:16.769518 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 52bc7fc0663b 13 hours ago 1.06GB 2025-07-12 14:16:16.769529 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0e5d94078a38 13 hours ago 1.05GB 2025-07-12 14:16:16.769539 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d803f5dcba2b 13 hours ago 1.06GB 2025-07-12 14:16:16.769550 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 7bd70fa2eaca 13 hours ago 1.05GB 2025-07-12 14:16:16.769561 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 47afb51ae8f8 13 hours ago 1.05GB 2025-07-12 14:16:16.769571 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 ea10afd51d8e 13 hours ago 1.24GB 2025-07-12 14:16:16.769582 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 a80373d5f022 13 hours ago 1.31GB 2025-07-12 14:16:16.769592 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 373788c4de01 13 hours ago 1.2GB 2025-07-12 14:16:16.769603 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 e76aee078f81 
13 hours ago 1.11GB 2025-07-12 14:16:16.769620 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 b7f54fc3ae64 13 hours ago 1.13GB 2025-07-12 14:16:16.769638 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 24e61d9295a6 13 hours ago 1.11GB 2025-07-12 14:16:16.769683 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 3e1a1d846e00 13 hours ago 1.29GB 2025-07-12 14:16:16.769704 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 97653d20b217 13 hours ago 1.42GB 2025-07-12 14:16:16.769722 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 06efaffd4461 13 hours ago 1.29GB 2025-07-12 14:16:16.769739 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 344e73ee870a 13 hours ago 1.29GB 2025-07-12 14:16:16.769757 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 ed5bf0762532 13 hours ago 1.06GB 2025-07-12 14:16:16.769776 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 3528d69772e2 13 hours ago 1.06GB 2025-07-12 14:16:16.769807 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 55b4043ace1e 13 hours ago 1.06GB 2025-07-12 14:16:16.769827 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 a7f0a5d9b28c 13 hours ago 1.15GB 2025-07-12 14:16:16.769842 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 820d96fc6871 13 hours ago 1.41GB 2025-07-12 14:16:16.769853 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 9e62aa5265cd 13 hours ago 1.41GB 2025-07-12 14:16:16.769864 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 6e5bcb7465c5 13 hours ago 946MB 2025-07-12 14:16:16.769875 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c088724e55ba 13 hours ago 946MB 2025-07-12 14:16:16.769886 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b96ae4c576bd 13 hours ago 947MB 2025-07-12 14:16:16.769897 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 4b91bbc5fcc8 13 hours 
ago 947MB 2025-07-12 14:16:17.051263 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-12 14:16:17.051415 | orchestrator | ++ semver latest 5.0.0 2025-07-12 14:16:17.104197 | orchestrator | 2025-07-12 14:16:17.104290 | orchestrator | ## Containers @ testbed-node-2 2025-07-12 14:16:17.104329 | orchestrator | 2025-07-12 14:16:17.104340 | orchestrator | + [[ -1 -eq -1 ]] 2025-07-12 14:16:17.104351 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-12 14:16:17.104397 | orchestrator | + echo 2025-07-12 14:16:17.104411 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-07-12 14:16:17.104424 | orchestrator | + echo 2025-07-12 14:16:17.104434 | orchestrator | + osism container testbed-node-2 ps 2025-07-12 14:16:19.335900 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-12 14:16:19.336000 | orchestrator | 4257f9660991 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-07-12 14:16:19.336015 | orchestrator | ac870ae00bc3 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-07-12 14:16:19.336027 | orchestrator | c4bb40fd38fd registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-07-12 14:16:19.336038 | orchestrator | 6e06f82f6562 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2025-07-12 14:16:19.336049 | orchestrator | c22682124057 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-12 14:16:19.336059 | orchestrator | 2a4e9078ea4e registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-07-12 14:16:19.336070 | orchestrator | dd9ccf3cffe5 
registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-07-12 14:16:19.336081 | orchestrator | def04d27f938 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) cinder_api 2025-07-12 14:16:19.336091 | orchestrator | 2c54f7c8f007 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-07-12 14:16:19.336102 | orchestrator | 5fbf770afde7 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-07-12 14:16:19.336113 | orchestrator | ff343653f1f8 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-07-12 14:16:19.336124 | orchestrator | a5730256c983 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-07-12 14:16:19.336134 | orchestrator | ac20d1836a19 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-07-12 14:16:19.336145 | orchestrator | 35638e532681 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_conductor 2025-07-12 14:16:19.336156 | orchestrator | d2ada2edd1c9 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) magnum_api 2025-07-12 14:16:19.336166 | orchestrator | bc09eccf5aee registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-07-12 14:16:19.336177 | orchestrator | dc4e9ecfa55b registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) 
designate_worker 2025-07-12 14:16:19.336213 | orchestrator | 314d19cc45eb registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-07-12 14:16:19.336224 | orchestrator | 90d87f31f083 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-07-12 14:16:19.336235 | orchestrator | 390096c81845 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-07-12 14:16:19.336246 | orchestrator | dc55d76d4b68 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-07-12 14:16:19.336273 | orchestrator | 3bc5a7edb652 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-07-12 14:16:19.336303 | orchestrator | 83c60d9285cf registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2025-07-12 14:16:19.336314 | orchestrator | 14eed48b83f6 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-07-12 14:16:19.336325 | orchestrator | ac7e6c3f5ba5 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-07-12 14:16:19.336336 | orchestrator | 021caf8c4ba7 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-07-12 14:16:19.336346 | orchestrator | e5c825a8b351 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-07-12 14:16:19.336357 | orchestrator | e64bb3e6528b registry.osism.tech/kolla/keystone:2024.2 "dumb-init 
--single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-07-12 14:16:19.336423 | orchestrator | 296d7c88b3ae registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-07-12 14:16:19.336447 | orchestrator | b44063dbe8ed registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-12 14:16:19.336466 | orchestrator | 331f68fc5298 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-07-12 14:16:19.336482 | orchestrator | e043c4cf26a4 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-07-12 14:16:19.336495 | orchestrator | eb7e0d30aae7 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-07-12 14:16:19.336507 | orchestrator | 943ce42ca2b1 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-07-12 14:16:19.336520 | orchestrator | 80542e010ad0 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2025-07-12 14:16:19.336532 | orchestrator | bd214e51f4b3 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-07-12 14:16:19.336543 | orchestrator | 2334e137b597 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-07-12 14:16:19.336564 | orchestrator | 7e12d848d3c2 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-07-12 14:16:19.336575 | orchestrator | ad84f684208c registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2025-07-12 
14:16:19.336586 | orchestrator | b5e9786ea331 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-07-12 14:16:19.336596 | orchestrator | b2141090e2ae registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-07-12 14:16:19.336607 | orchestrator | b9a77d456a1c registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-07-12 14:16:19.336618 | orchestrator | 940abdebc40e registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-07-12 14:16:19.336628 | orchestrator | dece65df6ad1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-07-12 14:16:19.336646 | orchestrator | 4bb6ebf8b7f0 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-07-12 14:16:19.336657 | orchestrator | c0f97d66963d registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-07-12 14:16:19.336668 | orchestrator | af666c034a92 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-07-12 14:16:19.336678 | orchestrator | 1b19875870cb registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-07-12 14:16:19.336689 | orchestrator | 4bf78e47da99 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-07-12 14:16:19.336699 | orchestrator | bf1e393d39f8 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-07-12 14:16:19.336710 | orchestrator | eb1dafc242f2 
registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-07-12 14:16:19.336720 | orchestrator | 40c833bb97a7 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-07-12 14:16:19.629963 | orchestrator | 2025-07-12 14:16:19.630141 | orchestrator | ## Images @ testbed-node-2 2025-07-12 14:16:19.630159 | orchestrator | 2025-07-12 14:16:19.630171 | orchestrator | + echo 2025-07-12 14:16:19.630182 | orchestrator | + echo '## Images @ testbed-node-2' 2025-07-12 14:16:19.630194 | orchestrator | + echo 2025-07-12 14:16:19.630205 | orchestrator | + osism container testbed-node-2 images 2025-07-12 14:16:21.905327 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-12 14:16:21.905488 | orchestrator | registry.osism.tech/osism/ceph-daemon reef fe9c699108e1 11 hours ago 1.27GB 2025-07-12 14:16:21.905506 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 da9bab98f1c4 13 hours ago 1.01GB 2025-07-12 14:16:21.905518 | orchestrator | registry.osism.tech/kolla/cron 2024.2 4ce8240a893c 13 hours ago 318MB 2025-07-12 14:16:21.905529 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f19504b04274 13 hours ago 318MB 2025-07-12 14:16:21.906247 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 ea215f3799eb 13 hours ago 375MB 2025-07-12 14:16:21.906288 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f6a8ddc0fa19 13 hours ago 746MB 2025-07-12 14:16:21.906300 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 db9179df457c 13 hours ago 417MB 2025-07-12 14:16:21.906311 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cb87a0b5a431 13 hours ago 628MB 2025-07-12 14:16:21.906321 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2ee2aea4ecbb 13 hours ago 329MB 2025-07-12 14:16:21.906332 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 ec7afc7181a3 13 hours ago 326MB 2025-07-12 14:16:21.906343 | 
orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 9a6d9feb60b1 13 hours ago 1.55GB 2025-07-12 14:16:21.906354 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b14bb9ff6f80 13 hours ago 1.59GB 2025-07-12 14:16:21.906370 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 7007172fb408 13 hours ago 410MB 2025-07-12 14:16:21.906422 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 aad3a3158749 13 hours ago 353MB 2025-07-12 14:16:21.906442 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7743da2fe9b2 13 hours ago 358MB 2025-07-12 14:16:21.906461 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e89c3afadc38 13 hours ago 344MB 2025-07-12 14:16:21.906476 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 2cebeabcbd0e 13 hours ago 351MB 2025-07-12 14:16:21.906486 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 adada41a764e 13 hours ago 1.21GB 2025-07-12 14:16:21.906497 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 15e39d968d77 13 hours ago 361MB 2025-07-12 14:16:21.906507 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 abe28dfb5ccc 13 hours ago 361MB 2025-07-12 14:16:21.906518 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 e8b0ed492d0f 13 hours ago 324MB 2025-07-12 14:16:21.906528 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29b0dc955a2b 13 hours ago 590MB 2025-07-12 14:16:21.906539 | orchestrator | registry.osism.tech/kolla/redis 2024.2 82d7de98b313 13 hours ago 324MB 2025-07-12 14:16:21.906549 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6ad384c8beaf 13 hours ago 1.04GB 2025-07-12 14:16:21.906560 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 95944a9fdd62 13 hours ago 1.05GB 2025-07-12 14:16:21.906570 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 
2024.2 52bc7fc0663b 13 hours ago 1.06GB 2025-07-12 14:16:21.906580 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0e5d94078a38 13 hours ago 1.05GB 2025-07-12 14:16:21.906591 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d803f5dcba2b 13 hours ago 1.06GB 2025-07-12 14:16:21.906601 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 7bd70fa2eaca 13 hours ago 1.05GB 2025-07-12 14:16:21.906612 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 47afb51ae8f8 13 hours ago 1.05GB 2025-07-12 14:16:21.906622 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 ea10afd51d8e 13 hours ago 1.24GB 2025-07-12 14:16:21.906633 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 a80373d5f022 13 hours ago 1.31GB 2025-07-12 14:16:21.906643 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 373788c4de01 13 hours ago 1.2GB 2025-07-12 14:16:21.906654 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 e76aee078f81 13 hours ago 1.11GB 2025-07-12 14:16:21.906676 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 b7f54fc3ae64 13 hours ago 1.13GB 2025-07-12 14:16:21.906686 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 24e61d9295a6 13 hours ago 1.11GB 2025-07-12 14:16:21.906719 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 3e1a1d846e00 13 hours ago 1.29GB 2025-07-12 14:16:21.906730 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 97653d20b217 13 hours ago 1.42GB 2025-07-12 14:16:21.906741 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 06efaffd4461 13 hours ago 1.29GB 2025-07-12 14:16:21.906751 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 344e73ee870a 13 hours ago 1.29GB 2025-07-12 14:16:21.906761 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 ed5bf0762532 13 hours ago 1.06GB 2025-07-12 14:16:21.906772 | orchestrator | 
registry.osism.tech/kolla/barbican-keystone-listener 2024.2 3528d69772e2 13 hours ago 1.06GB 2025-07-12 14:16:21.906782 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 55b4043ace1e 13 hours ago 1.06GB 2025-07-12 14:16:21.906793 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 a7f0a5d9b28c 13 hours ago 1.15GB 2025-07-12 14:16:21.906803 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 820d96fc6871 13 hours ago 1.41GB 2025-07-12 14:16:21.906814 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 9e62aa5265cd 13 hours ago 1.41GB 2025-07-12 14:16:21.906824 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 6e5bcb7465c5 13 hours ago 946MB 2025-07-12 14:16:21.906834 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c088724e55ba 13 hours ago 946MB 2025-07-12 14:16:21.906845 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b96ae4c576bd 13 hours ago 947MB 2025-07-12 14:16:21.906857 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 4b91bbc5fcc8 13 hours ago 947MB 2025-07-12 14:16:22.247635 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-07-12 14:16:22.255286 | orchestrator | + set -e 2025-07-12 14:16:22.255409 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 14:16:22.256434 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 14:16:22.256461 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 14:16:22.256472 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 14:16:22.257072 | orchestrator | ++ CEPH_VERSION=reef 2025-07-12 14:16:22.257094 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 14:16:22.257106 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 14:16:22.257117 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-12 14:16:22.257128 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-12 14:16:22.257139 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-12 14:16:22.257150 | 
orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-12 14:16:22.257161 | orchestrator | ++ export ARA=false 2025-07-12 14:16:22.257172 | orchestrator | ++ ARA=false 2025-07-12 14:16:22.257183 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-12 14:16:22.257194 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-12 14:16:22.257204 | orchestrator | ++ export TEMPEST=false 2025-07-12 14:16:22.257214 | orchestrator | ++ TEMPEST=false 2025-07-12 14:16:22.257225 | orchestrator | ++ export IS_ZUUL=true 2025-07-12 14:16:22.257235 | orchestrator | ++ IS_ZUUL=true 2025-07-12 14:16:22.257246 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2025-07-12 14:16:22.257257 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5 2025-07-12 14:16:22.257267 | orchestrator | ++ export EXTERNAL_API=false 2025-07-12 14:16:22.257278 | orchestrator | ++ EXTERNAL_API=false 2025-07-12 14:16:22.257288 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-12 14:16:22.257298 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-12 14:16:22.257309 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-12 14:16:22.257319 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-12 14:16:22.257330 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-12 14:16:22.257340 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-12 14:16:22.257351 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-12 14:16:22.257418 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-07-12 14:16:22.265329 | orchestrator | + set -e 2025-07-12 14:16:22.265362 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 14:16:22.265392 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 14:16:22.265403 | orchestrator | ++ INTERACTIVE=false 2025-07-12 14:16:22.265414 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 14:16:22.265424 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 14:16:22.265435 | orchestrator | + source 
/opt/configuration/scripts/manager-version.sh 2025-07-12 14:16:22.267168 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-12 14:16:22.274213 | orchestrator | 2025-07-12 14:16:22.274254 | orchestrator | # Ceph status 2025-07-12 14:16:22.274270 | orchestrator | 2025-07-12 14:16:22.274282 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-12 14:16:22.274294 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-12 14:16:22.274305 | orchestrator | + echo 2025-07-12 14:16:22.274316 | orchestrator | + echo '# Ceph status' 2025-07-12 14:16:22.274326 | orchestrator | + echo 2025-07-12 14:16:22.274337 | orchestrator | + ceph -s 2025-07-12 14:16:22.847506 | orchestrator | cluster: 2025-07-12 14:16:22.847614 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-07-12 14:16:22.847630 | orchestrator | health: HEALTH_OK 2025-07-12 14:16:22.847642 | orchestrator | 2025-07-12 14:16:22.847654 | orchestrator | services: 2025-07-12 14:16:22.847666 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-07-12 14:16:22.847678 | orchestrator | mgr: testbed-node-1(active, since 16m), standbys: testbed-node-2, testbed-node-0 2025-07-12 14:16:22.847690 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-07-12 14:16:22.847701 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m) 2025-07-12 14:16:22.847713 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-07-12 14:16:22.847723 | orchestrator | 2025-07-12 14:16:22.847734 | orchestrator | data: 2025-07-12 14:16:22.847746 | orchestrator | volumes: 1/1 healthy 2025-07-12 14:16:22.847756 | orchestrator | pools: 14 pools, 401 pgs 2025-07-12 14:16:22.847767 | orchestrator | objects: 524 objects, 2.2 GiB 2025-07-12 14:16:22.847801 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-07-12 14:16:22.847813 | orchestrator | pgs: 401 active+clean 2025-07-12 14:16:22.847824 | 
orchestrator | 2025-07-12 14:16:22.891835 | orchestrator | 2025-07-12 14:16:22.891874 | orchestrator | # Ceph versions 2025-07-12 14:16:22.891886 | orchestrator | 2025-07-12 14:16:22.891898 | orchestrator | + echo 2025-07-12 14:16:22.891909 | orchestrator | + echo '# Ceph versions' 2025-07-12 14:16:22.891919 | orchestrator | + echo 2025-07-12 14:16:22.891930 | orchestrator | + ceph versions 2025-07-12 14:16:23.456456 | orchestrator | { 2025-07-12 14:16:23.456554 | orchestrator | "mon": { 2025-07-12 14:16:23.456571 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 14:16:23.456584 | orchestrator | }, 2025-07-12 14:16:23.456595 | orchestrator | "mgr": { 2025-07-12 14:16:23.456606 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 14:16:23.456616 | orchestrator | }, 2025-07-12 14:16:23.456627 | orchestrator | "osd": { 2025-07-12 14:16:23.456638 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-07-12 14:16:23.456648 | orchestrator | }, 2025-07-12 14:16:23.456659 | orchestrator | "mds": { 2025-07-12 14:16:23.456670 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 14:16:23.456680 | orchestrator | }, 2025-07-12 14:16:23.456691 | orchestrator | "rgw": { 2025-07-12 14:16:23.456701 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 14:16:23.456712 | orchestrator | }, 2025-07-12 14:16:23.456722 | orchestrator | "overall": { 2025-07-12 14:16:23.456733 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-07-12 14:16:23.456744 | orchestrator | } 2025-07-12 14:16:23.456755 | orchestrator | } 2025-07-12 14:16:23.497210 | orchestrator | 2025-07-12 14:16:23.497279 | orchestrator | # Ceph OSD tree 2025-07-12 14:16:23.497293 | orchestrator | 
2025-07-12 14:16:23.497305 | orchestrator | + echo
2025-07-12 14:16:23.497317 | orchestrator | + echo '# Ceph OSD tree'
2025-07-12 14:16:23.497330 | orchestrator | + echo
2025-07-12 14:16:23.497341 | orchestrator | + ceph osd df tree
2025-07-12 14:16:24.071969 | orchestrator | ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
2025-07-12 14:16:24.072117 | orchestrator | -1         0.11691         -  120 GiB  7.1 GiB  6.7 GiB    6 KiB  430 MiB  113 GiB  5.92  1.00    -          root default
2025-07-12 14:16:24.072133 | orchestrator | -3         0.03897         -   40 GiB  2.4 GiB  2.2 GiB    2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-3
2025-07-12 14:16:24.072144 | orchestrator |  2    hdd  0.01949   1.00000   20 GiB  1.1 GiB  1.0 GiB    1 KiB   70 MiB   19 GiB  5.59  0.95  196      up          osd.2
2025-07-12 14:16:24.072155 | orchestrator |  5    hdd  0.01949   1.00000   20 GiB  1.2 GiB  1.2 GiB    1 KiB   74 MiB   19 GiB  6.24  1.05  194      up          osd.5
2025-07-12 14:16:24.072166 | orchestrator | -5         0.03897         -   40 GiB  2.4 GiB  2.2 GiB    2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-4
2025-07-12 14:16:24.072176 | orchestrator |  1    hdd  0.01949   1.00000   20 GiB  1.2 GiB  1.1 GiB    1 KiB   74 MiB   19 GiB  5.77  0.98  190      up          osd.1
2025-07-12 14:16:24.072187 | orchestrator |  4    hdd  0.01949   1.00000   20 GiB  1.2 GiB  1.1 GiB    1 KiB   70 MiB   19 GiB  6.06  1.02  202      up          osd.4
2025-07-12 14:16:24.072198 | orchestrator | -7         0.03897         -   40 GiB  2.4 GiB  2.2 GiB    2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-5
2025-07-12 14:16:24.072218 | orchestrator |  0    hdd  0.01949   1.00000   20 GiB  1.2 GiB  1.2 GiB    1 KiB   74 MiB   19 GiB  6.16  1.04  175      up          osd.0
2025-07-12 14:16:24.072229 | orchestrator |  3    hdd  0.01949   1.00000   20 GiB  1.1 GiB  1.1 GiB    1 KiB   70 MiB   19 GiB  5.67  0.96  213      up          osd.3
2025-07-12 14:16:24.072240 | orchestrator |                       TOTAL  120 GiB  7.1 GiB  6.7 GiB  9.3 KiB  430 MiB  113 GiB  5.92
2025-07-12 14:16:24.072251 | orchestrator | MIN/MAX VAR: 0.95/1.05  STDDEV: 0.25
2025-07-12 14:16:24.121674 | orchestrator |
2025-07-12 14:16:24.121785 | orchestrator | # Ceph monitor status
2025-07-12 14:16:24.121808 | orchestrator |
2025-07-12 14:16:24.121826 | orchestrator | + echo
2025-07-12 14:16:24.121837 | orchestrator | + echo '# Ceph monitor status'
2025-07-12 14:16:24.121848 | orchestrator | + echo
2025-07-12 14:16:24.121859 | orchestrator | + ceph mon stat
2025-07-12 14:16:24.766730 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-07-12 14:16:24.820555 | orchestrator |
2025-07-12 14:16:24.820646 | orchestrator | # Ceph quorum status
2025-07-12 14:16:24.820662 | orchestrator |
2025-07-12 14:16:24.820675 | orchestrator | + echo
2025-07-12 14:16:24.820687 | orchestrator | + echo '# Ceph quorum status'
2025-07-12 14:16:24.820699 | orchestrator | + echo
2025-07-12 14:16:24.820827 | orchestrator | + ceph quorum_status
2025-07-12 14:16:24.820846 | orchestrator | + jq
2025-07-12 14:16:25.465757 | orchestrator | {
2025-07-12 14:16:25.465970 | orchestrator |   "election_epoch": 8,
2025-07-12 14:16:25.465991 | orchestrator |   "quorum": [
2025-07-12 14:16:25.466003 | orchestrator |     0,
2025-07-12 14:16:25.466066 | orchestrator |     1,
2025-07-12 14:16:25.466081 | orchestrator |     2
2025-07-12 14:16:25.466093 | orchestrator |   ],
2025-07-12 14:16:25.466104 | orchestrator |   "quorum_names": [
2025-07-12 14:16:25.466115 | orchestrator |     "testbed-node-0",
2025-07-12 14:16:25.466125 | orchestrator |     "testbed-node-1",
2025-07-12 14:16:25.466136 | orchestrator |     "testbed-node-2"
2025-07-12 14:16:25.466147 | orchestrator |   ],
2025-07-12 14:16:25.466158 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2025-07-12 14:16:25.466170 | orchestrator |   "quorum_age": 1716,
2025-07-12 14:16:25.466181 | orchestrator |   "features": {
2025-07-12 14:16:25.466192 | orchestrator |     "quorum_con": "4540138322906710015",
2025-07-12 14:16:25.466203 | orchestrator |     "quorum_mon": [
2025-07-12 14:16:25.466213 | orchestrator |       "kraken",
2025-07-12 14:16:25.466224 | orchestrator |       "luminous",
2025-07-12 14:16:25.466235 | orchestrator |       "mimic",
2025-07-12 14:16:25.466245 | orchestrator |       "osdmap-prune",
2025-07-12 14:16:25.466256 | orchestrator |       "nautilus",
2025-07-12 14:16:25.466267 | orchestrator |       "octopus",
2025-07-12 14:16:25.466277 | orchestrator |       "pacific",
2025-07-12 14:16:25.466288 | orchestrator |       "elector-pinging",
2025-07-12 14:16:25.466298 | orchestrator |       "quincy",
2025-07-12 14:16:25.466335 | orchestrator |       "reef"
2025-07-12 14:16:25.466347 | orchestrator |     ]
2025-07-12 14:16:25.466357 | orchestrator |   },
2025-07-12 14:16:25.466368 | orchestrator |   "monmap": {
2025-07-12 14:16:25.466405 | orchestrator |     "epoch": 1,
2025-07-12 14:16:25.466425 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2025-07-12 14:16:25.466445 | orchestrator |     "modified": "2025-07-12T13:47:31.596039Z",
2025-07-12 14:16:25.466456 | orchestrator |     "created": "2025-07-12T13:47:31.596039Z",
2025-07-12 14:16:25.466467 | orchestrator |     "min_mon_release": 18,
2025-07-12 14:16:25.466478 | orchestrator |     "min_mon_release_name": "reef",
2025-07-12 14:16:25.466489 | orchestrator |     "election_strategy": 1,
2025-07-12 14:16:25.466499 | orchestrator |     "disallowed_leaders: ": "",
2025-07-12 14:16:25.466510 | orchestrator |     "stretch_mode": false,
2025-07-12 14:16:25.466521 | orchestrator |     "tiebreaker_mon": "",
2025-07-12 14:16:25.466531 | orchestrator |     "removed_ranks: ": "",
2025-07-12 14:16:25.466542 | orchestrator |     "features": {
2025-07-12 14:16:25.466553 | orchestrator |       "persistent": [
2025-07-12 14:16:25.466564 | orchestrator |         "kraken",
2025-07-12 14:16:25.466574 | orchestrator |         "luminous",
2025-07-12 14:16:25.466585 | orchestrator |         "mimic",
2025-07-12 14:16:25.466595 | orchestrator |         "osdmap-prune",
2025-07-12 14:16:25.466606 | orchestrator |         "nautilus",
2025-07-12 14:16:25.466616 | orchestrator |         "octopus",
2025-07-12 14:16:25.466627 | orchestrator |         "pacific",
2025-07-12 14:16:25.466638 | orchestrator |         "elector-pinging",
2025-07-12 14:16:25.466649 | orchestrator |         "quincy",
2025-07-12 14:16:25.466660 | orchestrator |         "reef"
2025-07-12 14:16:25.466671 | orchestrator |       ],
2025-07-12 14:16:25.466681 | orchestrator |       "optional": []
2025-07-12 14:16:25.466692 | orchestrator |     },
2025-07-12 14:16:25.466703 | orchestrator |     "mons": [
2025-07-12 14:16:25.466713 | orchestrator |       {
2025-07-12 14:16:25.466724 | orchestrator |         "rank": 0,
2025-07-12 14:16:25.466735 | orchestrator |         "name": "testbed-node-0",
2025-07-12 14:16:25.466745 | orchestrator |         "public_addrs": {
2025-07-12 14:16:25.466756 | orchestrator |           "addrvec": [
2025-07-12 14:16:25.466767 | orchestrator |             {
2025-07-12 14:16:25.466777 | orchestrator |               "type": "v2",
2025-07-12 14:16:25.466788 | orchestrator |               "addr": "192.168.16.10:3300",
2025-07-12 14:16:25.466798 | orchestrator |               "nonce": 0
2025-07-12 14:16:25.466809 | orchestrator |             },
2025-07-12 14:16:25.466820 | orchestrator |             {
2025-07-12 14:16:25.466830 | orchestrator |               "type": "v1",
2025-07-12 14:16:25.466841 | orchestrator |               "addr": "192.168.16.10:6789",
2025-07-12 14:16:25.466852 | orchestrator |               "nonce": 0
2025-07-12 14:16:25.466862 | orchestrator |             }
2025-07-12 14:16:25.466873 | orchestrator |           ]
2025-07-12 14:16:25.466884 | orchestrator |         },
2025-07-12 14:16:25.466894 | orchestrator |         "addr": "192.168.16.10:6789/0",
2025-07-12 14:16:25.466905 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2025-07-12 14:16:25.466916 | orchestrator |         "priority": 0,
2025-07-12 14:16:25.466926 | orchestrator |         "weight": 0,
2025-07-12 14:16:25.466937 | orchestrator |         "crush_location": "{}"
2025-07-12 14:16:25.466947 | orchestrator |       },
2025-07-12 14:16:25.466958 | orchestrator |       {
2025-07-12 14:16:25.466969 | orchestrator |         "rank": 1,
2025-07-12 14:16:25.466980 | orchestrator |         "name": "testbed-node-1",
2025-07-12 14:16:25.466991 | orchestrator |         "public_addrs": {
2025-07-12 14:16:25.467002 | orchestrator |           "addrvec": [
2025-07-12 14:16:25.467012 | orchestrator |             {
2025-07-12 14:16:25.467023 | orchestrator |               "type": "v2",
2025-07-12 14:16:25.467034 | orchestrator |               "addr": "192.168.16.11:3300",
2025-07-12 14:16:25.467044 | orchestrator |               "nonce": 0
2025-07-12 14:16:25.467055 | orchestrator |             },
2025-07-12 14:16:25.467066 | orchestrator |             {
2025-07-12 14:16:25.467076 | orchestrator |               "type": "v1",
2025-07-12 14:16:25.467087 | orchestrator |               "addr": "192.168.16.11:6789",
2025-07-12 14:16:25.467097 | orchestrator |               "nonce": 0
2025-07-12 14:16:25.467108 | orchestrator |             }
2025-07-12 14:16:25.467119 | orchestrator |           ]
2025-07-12 14:16:25.467129 | orchestrator |         },
2025-07-12 14:16:25.467140 | orchestrator |         "addr": "192.168.16.11:6789/0",
2025-07-12 14:16:25.467151 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2025-07-12 14:16:25.467161 | orchestrator |         "priority": 0,
2025-07-12 14:16:25.467172 | orchestrator |         "weight": 0,
2025-07-12 14:16:25.467182 | orchestrator |         "crush_location": "{}"
2025-07-12 14:16:25.467193 | orchestrator |       },
2025-07-12 14:16:25.467204 | orchestrator |       {
2025-07-12 14:16:25.467223 | orchestrator |         "rank": 2,
2025-07-12 14:16:25.467233 | orchestrator |         "name": "testbed-node-2",
2025-07-12 14:16:25.467244 | orchestrator |         "public_addrs": {
2025-07-12 14:16:25.467255 | orchestrator |           "addrvec": [
2025-07-12 14:16:25.467265 | orchestrator |             {
2025-07-12 14:16:25.467276 | orchestrator |               "type": "v2",
2025-07-12 14:16:25.467287 | orchestrator |               "addr": "192.168.16.12:3300",
2025-07-12 14:16:25.467297 | orchestrator |               "nonce": 0
2025-07-12 14:16:25.467308 | orchestrator |             },
2025-07-12 14:16:25.467319 | orchestrator |             {
2025-07-12 14:16:25.467329 | orchestrator |               "type": "v1",
2025-07-12 14:16:25.467340 | orchestrator |               "addr": "192.168.16.12:6789",
2025-07-12 14:16:25.467351 | orchestrator |               "nonce": 0
2025-07-12 14:16:25.467361 | orchestrator |             }
2025-07-12 14:16:25.467372 | orchestrator |           ]
2025-07-12 14:16:25.467424 | orchestrator |         },
2025-07-12 14:16:25.467436 | orchestrator |         "addr": "192.168.16.12:6789/0",
2025-07-12 14:16:25.467447 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2025-07-12 14:16:25.467458 | orchestrator |         "priority": 0,
2025-07-12 14:16:25.467468 | orchestrator |         "weight": 0,
2025-07-12 14:16:25.467479 | orchestrator |         "crush_location": "{}"
2025-07-12 14:16:25.467490 | orchestrator |       }
2025-07-12 14:16:25.467500 | orchestrator |     ]
2025-07-12 14:16:25.467511 | orchestrator |   }
2025-07-12 14:16:25.467522 | orchestrator | }
2025-07-12 14:16:25.467673 | orchestrator |
2025-07-12 14:16:25.467689 | orchestrator | # Ceph free space status
2025-07-12 14:16:25.467701 | orchestrator |
2025-07-12 14:16:25.467712 | orchestrator | + echo
2025-07-12 14:16:25.467723 | orchestrator | + echo '# Ceph free space status'
2025-07-12 14:16:25.467734 | orchestrator | + echo
2025-07-12 14:16:25.467745 | orchestrator | + ceph df
2025-07-12 14:16:26.039671 | orchestrator | --- RAW STORAGE ---
2025-07-12 14:16:26.039779 | orchestrator | CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
2025-07-12 14:16:26.039808 | orchestrator | hdd    120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.92
2025-07-12 14:16:26.039820 | orchestrator | TOTAL  120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.92
2025-07-12 14:16:26.039832 | orchestrator |
2025-07-12 14:16:26.039843 | orchestrator | --- POOLS ---
2025-07-12 14:16:26.039855 | orchestrator | POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
2025-07-12 14:16:26.039867 | orchestrator | .mgr                        1    1  577 KiB        2  1.1 MiB      0     53 GiB
2025-07-12 14:16:26.039879 | orchestrator | cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
2025-07-12 14:16:26.039889 | orchestrator | cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
2025-07-12 14:16:26.039900 | orchestrator | default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
2025-07-12 14:16:26.039911 | orchestrator | default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
2025-07-12 14:16:26.039922 | orchestrator | default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
2025-07-12 14:16:26.039932 | orchestrator | default.rgw.log             7   32  3.6 KiB      177  408 KiB      0     35 GiB
2025-07-12 14:16:26.039943 | orchestrator | default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
2025-07-12 14:16:26.039954 | orchestrator | .rgw.root                   9   32  3.9 KiB        8   64 KiB      0     53 GiB
2025-07-12 14:16:26.039965 | orchestrator | backups                    10   32     19 B        2   12 KiB      0     35 GiB
2025-07-12 14:16:26.039975 | orchestrator | volumes                    11   32     19 B        2   12 KiB      0     35 GiB
2025-07-12 14:16:26.039986 | orchestrator | images                     12   32  2.2 GiB      299  6.7 GiB   5.89     35 GiB
2025-07-12 14:16:26.039997 | orchestrator | metrics                    13   32     19 B        2   12 KiB      0     35 GiB
2025-07-12 14:16:26.040007 | orchestrator | vms                        14   32     19 B        2   12 KiB      0     35 GiB
2025-07-12 14:16:26.085272 | orchestrator | ++ semver latest 5.0.0
2025-07-12 14:16:26.139603 | orchestrator | + [[ -1 -eq -1 ]]
2025-07-12 14:16:26.139708 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-12 14:16:26.139724 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-07-12 14:16:26.139735 | orchestrator | + osism apply facts
2025-07-12 14:16:28.005561 | orchestrator | 2025-07-12 14:16:28 | INFO  | Task 5cdc9783-9f54-470f-a6b6-4b21cbf6d886 (facts) was prepared for execution.
2025-07-12 14:16:28.005671 | orchestrator | 2025-07-12 14:16:28 | INFO  | It takes a moment until task 5cdc9783-9f54-470f-a6b6-4b21cbf6d886 (facts) has been started and output is visible here.
2025-07-12 14:16:41.275349 | orchestrator |
2025-07-12 14:16:41.275491 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-12 14:16:41.275510 | orchestrator |
2025-07-12 14:16:41.275523 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 14:16:41.275534 | orchestrator | Saturday 12 July 2025  14:16:32 +0000 (0:00:00.278)       0:00:00.278 *********
2025-07-12 14:16:41.275546 | orchestrator | ok: [testbed-manager]
2025-07-12 14:16:41.275558 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:16:41.275569 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:16:41.275579 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:16:41.275590 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:16:41.275600 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:16:41.275611 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:16:41.275621 | orchestrator |
2025-07-12 14:16:41.275632 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 14:16:41.275643 | orchestrator | Saturday 12 July 2025  14:16:33 +0000 (0:00:01.505)       0:00:01.784 *********
2025-07-12 14:16:41.275654 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:16:41.275665 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:16:41.275675 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:16:41.275686 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:16:41.275697 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:16:41.275707 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:16:41.275718 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:16:41.275728 | orchestrator |
2025-07-12 14:16:41.275739 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 14:16:41.275749 | orchestrator |
2025-07-12 14:16:41.275760 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 14:16:41.275770 | orchestrator | Saturday 12 July 2025  14:16:35 +0000 (0:00:01.291)       0:00:03.076 *********
2025-07-12 14:16:41.275781 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:16:41.275792 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:16:41.275802 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:16:41.275813 | orchestrator | ok: [testbed-manager]
2025-07-12 14:16:41.275823 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:16:41.275834 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:16:41.275844 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:16:41.275855 | orchestrator |
2025-07-12 14:16:41.275865 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 14:16:41.275876 | orchestrator |
2025-07-12 14:16:41.275887 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 14:16:41.275898 | orchestrator | Saturday 12 July 2025  14:16:40 +0000 (0:00:05.098)       0:00:08.174 *********
2025-07-12 14:16:41.275910 | orchestrator | skipping: [testbed-manager]
2025-07-12 14:16:41.275923 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:16:41.275935 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:16:41.275946 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:16:41.275958 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:16:41.275970 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:16:41.275982 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:16:41.275994 | orchestrator |
2025-07-12 14:16:41.276005 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:16:41.276018 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:41.276031 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:41.276043 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:41.276065 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:41.276096 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:41.276108 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:41.276120 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 14:16:41.276132 | orchestrator |
2025-07-12 14:16:41.276145 | orchestrator |
2025-07-12 14:16:41.276157 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:16:41.276169 | orchestrator | Saturday 12 July 2025  14:16:40 +0000 (0:00:00.540)       0:00:08.714 *********
2025-07-12 14:16:41.276181 | orchestrator | ===============================================================================
2025-07-12 14:16:41.276193 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.10s
2025-07-12 14:16:41.276205 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.51s
2025-07-12 14:16:41.276217 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s
2025-07-12 14:16:41.276229 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2025-07-12 14:16:41.576833 | orchestrator | + osism validate ceph-mons
2025-07-12 14:17:13.456437 | orchestrator |
2025-07-12 14:17:13.456600 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-07-12 14:17:13.456619 | orchestrator |
2025-07-12 14:17:13.456632 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 14:17:13.456643 | orchestrator | Saturday 12 July 2025  14:16:57 +0000 (0:00:00.425)       0:00:00.425 *********
2025-07-12 14:17:13.456675 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:13.456687 | orchestrator |
2025-07-12 14:17:13.456698 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-12 14:17:13.456709 | orchestrator | Saturday 12 July 2025  14:16:58 +0000 (0:00:00.660)       0:00:01.086 *********
2025-07-12 14:17:13.456720 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:13.456731 | orchestrator |
2025-07-12 14:17:13.456742 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-12 14:17:13.456753 | orchestrator | Saturday 12 July 2025  14:16:59 +0000 (0:00:00.960)       0:00:02.046 *********
2025-07-12 14:17:13.456764 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.456776 | orchestrator |
2025-07-12 14:17:13.456787 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-07-12 14:17:13.456798 | orchestrator | Saturday 12 July 2025  14:16:59 +0000 (0:00:00.257)       0:00:02.304 *********
2025-07-12 14:17:13.456809 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.456820 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:13.456830 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:13.456841 | orchestrator |
2025-07-12 14:17:13.456853 | orchestrator | TASK [Get container info] ******************************************************
2025-07-12 14:17:13.456865 | orchestrator | Saturday 12 July 2025  14:17:00 +0000 (0:00:00.306)       0:00:02.611 *********
2025-07-12 14:17:13.456875 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.456886 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:13.456897 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:13.456908 | orchestrator |
2025-07-12 14:17:13.456918 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-07-12 14:17:13.456929 | orchestrator | Saturday 12 July 2025  14:17:01 +0000 (0:00:01.219)       0:00:03.831 *********
2025-07-12 14:17:13.456940 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.456951 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:17:13.456962 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:17:13.456975 | orchestrator |
2025-07-12 14:17:13.456988 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-07-12 14:17:13.457025 | orchestrator | Saturday 12 July 2025  14:17:01 +0000 (0:00:00.303)       0:00:04.134 *********
2025-07-12 14:17:13.457038 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.457050 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:13.457062 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:13.457075 | orchestrator |
2025-07-12 14:17:13.457087 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 14:17:13.457099 | orchestrator | Saturday 12 July 2025  14:17:02 +0000 (0:00:00.328)       0:00:04.643 *********
2025-07-12 14:17:13.457111 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.457123 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:13.457135 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:13.457147 | orchestrator |
2025-07-12 14:17:13.457159 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-07-12 14:17:13.457171 | orchestrator | Saturday 12 July 2025  14:17:02 +0000 (0:00:00.288)       0:00:04.972 *********
2025-07-12 14:17:13.457184 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.457196 | orchestrator | skipping: [testbed-node-1]
2025-07-12 14:17:13.457208 | orchestrator | skipping: [testbed-node-2]
2025-07-12 14:17:13.457220 | orchestrator |
2025-07-12 14:17:13.457233 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-07-12 14:17:13.457246 | orchestrator | Saturday 12 July 2025  14:17:02 +0000 (0:00:00.294)       0:00:05.260 *********
2025-07-12 14:17:13.457258 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.457270 | orchestrator | ok: [testbed-node-1]
2025-07-12 14:17:13.457282 | orchestrator | ok: [testbed-node-2]
2025-07-12 14:17:13.457294 | orchestrator |
2025-07-12 14:17:13.457306 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 14:17:13.457319 | orchestrator | Saturday 12 July 2025  14:17:02 +0000 (0:00:00.261)       0:00:05.555 *********
2025-07-12 14:17:13.457331 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.457341 | orchestrator |
2025-07-12 14:17:13.457352 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 14:17:13.457363 | orchestrator | Saturday 12 July 2025  14:17:03 +0000 (0:00:00.690)       0:00:05.816 *********
2025-07-12 14:17:13.457374 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.457384 | orchestrator |
2025-07-12 14:17:13.457396 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 14:17:13.457406 | orchestrator | Saturday 12 July 2025  14:17:03 +0000 (0:00:00.238)       0:00:06.507 *********
2025-07-12 14:17:13.457417 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.457428 | orchestrator |
2025-07-12 14:17:13.457439 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:13.457450 | orchestrator | Saturday 12 July 2025  14:17:04 +0000 (0:00:00.067)       0:00:06.745 *********
2025-07-12 14:17:13.457523 | orchestrator |
2025-07-12 14:17:13.457536 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:13.457547 | orchestrator | Saturday 12 July 2025  14:17:04 +0000 (0:00:00.066)       0:00:06.813 *********
2025-07-12 14:17:13.457558 | orchestrator |
2025-07-12 14:17:13.457569 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:13.457580 | orchestrator | Saturday 12 July 2025  14:17:04 +0000 (0:00:00.066)       0:00:06.880 *********
2025-07-12 14:17:13.457591 | orchestrator |
2025-07-12 14:17:13.457602 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 14:17:13.457613 | orchestrator | Saturday 12 July 2025  14:17:04 +0000 (0:00:00.069)       0:00:06.950 *********
2025-07-12 14:17:13.457624 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.457635 | orchestrator |
2025-07-12 14:17:13.457646 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-07-12 14:17:13.457657 | orchestrator | Saturday 12 July 2025  14:17:04 +0000 (0:00:00.226)       0:00:07.177 *********
2025-07-12 14:17:13.457667 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.457678 | orchestrator |
2025-07-12 14:17:13.457708 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-07-12 14:17:13.457729 | orchestrator | Saturday 12 July 2025  14:17:04 +0000 (0:00:00.249)       0:00:07.426 *********
2025-07-12 14:17:13.457740 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.457751 | orchestrator |
2025-07-12 14:17:13.457762 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-07-12 14:17:13.457773 | orchestrator | Saturday 12 July 2025  14:17:04 +0000 (0:00:00.110)       0:00:07.537 *********
2025-07-12 14:17:13.457784 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:17:13.457794 | orchestrator |
2025-07-12 14:17:13.457805 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-07-12 14:17:13.457816 | orchestrator | Saturday 12 July 2025  14:17:06 +0000 (0:00:01.556)       0:00:09.093 *********
2025-07-12 14:17:13.457827 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.457837 | orchestrator |
2025-07-12 14:17:13.457866 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-07-12 14:17:13.457878 | orchestrator | Saturday 12 July 2025  14:17:06 +0000 (0:00:00.317)       0:00:09.410 *********
2025-07-12 14:17:13.457889 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.457900 | orchestrator |
2025-07-12 14:17:13.457911 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-07-12 14:17:13.457922 | orchestrator | Saturday 12 July 2025  14:17:06 +0000 (0:00:00.124)       0:00:09.535 *********
2025-07-12 14:17:13.457932 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.457943 | orchestrator |
2025-07-12 14:17:13.457954 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-07-12 14:17:13.457965 | orchestrator | Saturday 12 July 2025  14:17:07 +0000 (0:00:00.500)       0:00:10.036 *********
2025-07-12 14:17:13.457976 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.457987 | orchestrator |
2025-07-12 14:17:13.457998 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-07-12 14:17:13.458009 | orchestrator | Saturday 12 July 2025  14:17:07 +0000 (0:00:00.304)       0:00:10.340 *********
2025-07-12 14:17:13.458083 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.458095 | orchestrator |
2025-07-12 14:17:13.458106 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-07-12 14:17:13.458117 | orchestrator | Saturday 12 July 2025  14:17:07 +0000 (0:00:00.126)       0:00:10.467 *********
2025-07-12 14:17:13.458127 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.458138 | orchestrator |
2025-07-12 14:17:13.458149 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-07-12 14:17:13.458160 | orchestrator | Saturday 12 July 2025  14:17:07 +0000 (0:00:00.131)       0:00:10.599 *********
2025-07-12 14:17:13.458170 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.458181 | orchestrator |
2025-07-12 14:17:13.458192 | orchestrator | TASK [Gather status data] ******************************************************
2025-07-12 14:17:13.458202 | orchestrator | Saturday 12 July 2025  14:17:08 +0000 (0:00:00.137)       0:00:10.736 *********
2025-07-12 14:17:13.458213 | orchestrator | changed: [testbed-node-0]
2025-07-12 14:17:13.458224 | orchestrator |
2025-07-12 14:17:13.458234 | orchestrator | TASK [Set health test data] ****************************************************
2025-07-12 14:17:13.458245 | orchestrator | Saturday 12 July 2025  14:17:09 +0000 (0:00:01.297)       0:00:12.034 *********
2025-07-12 14:17:13.458256 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.458267 | orchestrator |
2025-07-12 14:17:13.458277 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-07-12 14:17:13.458288 | orchestrator | Saturday 12 July 2025  14:17:09 +0000 (0:00:00.284)       0:00:12.318 *********
2025-07-12 14:17:13.458299 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.458310 | orchestrator |
2025-07-12 14:17:13.458320 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-07-12 14:17:13.458331 | orchestrator | Saturday 12 July 2025  14:17:09 +0000 (0:00:00.145)       0:00:12.464 *********
2025-07-12 14:17:13.458341 | orchestrator | ok: [testbed-node-0]
2025-07-12 14:17:13.458352 | orchestrator |
2025-07-12 14:17:13.458363 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-07-12 14:17:13.458381 | orchestrator | Saturday 12 July 2025  14:17:10 +0000 (0:00:00.157)       0:00:12.621 *********
2025-07-12 14:17:13.458393 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.458403 | orchestrator |
2025-07-12 14:17:13.458414 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-07-12 14:17:13.458425 | orchestrator | Saturday 12 July 2025  14:17:10 +0000 (0:00:00.136)       0:00:12.758 *********
2025-07-12 14:17:13.458435 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.458446 | orchestrator |
2025-07-12 14:17:13.458481 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 14:17:13.458494 | orchestrator | Saturday 12 July 2025  14:17:10 +0000 (0:00:00.133)       0:00:12.892 *********
2025-07-12 14:17:13.458505 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:13.458516 | orchestrator |
2025-07-12 14:17:13.458526 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-12 14:17:13.458537 | orchestrator | Saturday 12 July 2025  14:17:10 +0000 (0:00:00.660)       0:00:13.553 *********
2025-07-12 14:17:13.458548 | orchestrator | skipping: [testbed-node-0]
2025-07-12 14:17:13.458559 | orchestrator |
2025-07-12 14:17:13.458616 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 14:17:13.458629 | orchestrator | Saturday 12 July 2025  14:17:11 +0000 (0:00:00.226)       0:00:13.779 *********
2025-07-12 14:17:13.458640 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:13.458651 | orchestrator |
2025-07-12 14:17:13.458661 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 14:17:13.458672 | orchestrator | Saturday 12 July 2025  14:17:12 +0000 (0:00:01.561)       0:00:15.340 *********
2025-07-12 14:17:13.458683 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:13.458694 | orchestrator |
2025-07-12 14:17:13.458705 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 14:17:13.458715 | orchestrator | Saturday 12 July 2025  14:17:12 +0000 (0:00:00.248)       0:00:15.589 *********
2025-07-12 14:17:13.458726 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:13.458737 | orchestrator |
2025-07-12 14:17:13.458757 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:15.550407 | orchestrator | Saturday 12 July 2025  14:17:13 +0000 (0:00:00.246)       0:00:15.836 *********
2025-07-12 14:17:15.550548 | orchestrator |
2025-07-12 14:17:15.550564 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:15.550576 | orchestrator | Saturday 12 July 2025  14:17:13 +0000 (0:00:00.063)       0:00:15.899 *********
2025-07-12 14:17:15.550600 | orchestrator |
2025-07-12 14:17:15.550612 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:17:15.550623 | orchestrator | Saturday 12 July 2025  14:17:13 +0000 (0:00:00.068)       0:00:15.968 *********
2025-07-12 14:17:15.550633 | orchestrator |
2025-07-12 14:17:15.550644 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 14:17:15.550655 | orchestrator | Saturday 12 July 2025  14:17:13 +0000 (0:00:00.071)       0:00:16.040 *********
2025-07-12 14:17:15.550667 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 14:17:15.550677 | orchestrator |
2025-07-12 14:17:15.550688 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 14:17:15.550699 | orchestrator | Saturday 12 July 2025  14:17:14 +0000 (0:00:01.265)       0:00:17.305 *********
2025-07-12 14:17:15.550710 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-07-12 14:17:15.550721 | orchestrator |     "msg": [
2025-07-12 14:17:15.550733 | orchestrator |  "Validator run completed.", 2025-07-12 14:17:15.550744 | orchestrator |  "You can find the report file here:", 2025-07-12 14:17:15.550755 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-07-12T14:16:58+00:00-report.json", 2025-07-12 14:17:15.550767 | orchestrator |  "on the following host:", 2025-07-12 14:17:15.550778 | orchestrator |  "testbed-manager" 2025-07-12 14:17:15.550819 | orchestrator |  ] 2025-07-12 14:17:15.550831 | orchestrator | } 2025-07-12 14:17:15.550842 | orchestrator | 2025-07-12 14:17:15.550853 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:17:15.550865 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 14:17:15.550877 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 14:17:15.550893 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 14:17:15.550904 | orchestrator | 2025-07-12 14:17:15.550915 | orchestrator | 2025-07-12 14:17:15.550925 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:17:15.550936 | orchestrator | Saturday 12 July 2025 14:17:15 +0000 (0:00:00.405) 0:00:17.711 ********* 2025-07-12 14:17:15.550949 | orchestrator | =============================================================================== 2025-07-12 14:17:15.550961 | orchestrator | Aggregate test results step one ----------------------------------------- 1.56s 2025-07-12 14:17:15.550973 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.56s 2025-07-12 14:17:15.550985 | orchestrator | Gather status data ------------------------------------------------------ 1.30s 2025-07-12 14:17:15.550998 | orchestrator | Write report file 
------------------------------------------------------- 1.27s 2025-07-12 14:17:15.551010 | orchestrator | Get container info ------------------------------------------------------ 1.22s 2025-07-12 14:17:15.551022 | orchestrator | Create report output directory ------------------------------------------ 0.96s 2025-07-12 14:17:15.551034 | orchestrator | Aggregate test results step two ----------------------------------------- 0.69s 2025-07-12 14:17:15.551046 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.66s 2025-07-12 14:17:15.551058 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s 2025-07-12 14:17:15.551070 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s 2025-07-12 14:17:15.551083 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.50s 2025-07-12 14:17:15.551095 | orchestrator | Print report file information ------------------------------------------- 0.41s 2025-07-12 14:17:15.551107 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2025-07-12 14:17:15.551119 | orchestrator | Set quorum test data ---------------------------------------------------- 0.32s 2025-07-12 14:17:15.551132 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-07-12 14:17:15.551144 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s 2025-07-12 14:17:15.551156 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2025-07-12 14:17:15.551168 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s 2025-07-12 14:17:15.551180 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s 2025-07-12 14:17:15.551192 | orchestrator | Set health test data 
---------------------------------------------------- 0.28s 2025-07-12 14:17:15.846721 | orchestrator | + osism validate ceph-mgrs 2025-07-12 14:17:46.710904 | orchestrator | 2025-07-12 14:17:46.711017 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-07-12 14:17:46.711035 | orchestrator | 2025-07-12 14:17:46.711048 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-12 14:17:46.711061 | orchestrator | Saturday 12 July 2025 14:17:32 +0000 (0:00:00.431) 0:00:00.431 ********* 2025-07-12 14:17:46.711073 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 14:17:46.711084 | orchestrator | 2025-07-12 14:17:46.711096 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-12 14:17:46.711107 | orchestrator | Saturday 12 July 2025 14:17:32 +0000 (0:00:00.654) 0:00:01.085 ********* 2025-07-12 14:17:46.711144 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 14:17:46.711156 | orchestrator | 2025-07-12 14:17:46.711166 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-12 14:17:46.711177 | orchestrator | Saturday 12 July 2025 14:17:33 +0000 (0:00:00.830) 0:00:01.916 ********* 2025-07-12 14:17:46.711188 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:17:46.711200 | orchestrator | 2025-07-12 14:17:46.711211 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-07-12 14:17:46.711222 | orchestrator | Saturday 12 July 2025 14:17:33 +0000 (0:00:00.254) 0:00:02.171 ********* 2025-07-12 14:17:46.711233 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:17:46.711244 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:17:46.711254 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:17:46.711266 | orchestrator | 2025-07-12 14:17:46.711277 | orchestrator | TASK [Get 
container info] ****************************************************** 2025-07-12 14:17:46.711288 | orchestrator | Saturday 12 July 2025 14:17:34 +0000 (0:00:00.285) 0:00:02.456 ********* 2025-07-12 14:17:46.711298 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:17:46.711309 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:17:46.711320 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:17:46.711330 | orchestrator | 2025-07-12 14:17:46.711341 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-07-12 14:17:46.711352 | orchestrator | Saturday 12 July 2025 14:17:35 +0000 (0:00:00.985) 0:00:03.442 ********* 2025-07-12 14:17:46.711363 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:17:46.711374 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:17:46.711384 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:17:46.711395 | orchestrator | 2025-07-12 14:17:46.711406 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-07-12 14:17:46.711417 | orchestrator | Saturday 12 July 2025 14:17:35 +0000 (0:00:00.289) 0:00:03.731 ********* 2025-07-12 14:17:46.711427 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:17:46.711441 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:17:46.711462 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:17:46.711475 | orchestrator | 2025-07-12 14:17:46.711487 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-12 14:17:46.711499 | orchestrator | Saturday 12 July 2025 14:17:35 +0000 (0:00:00.503) 0:00:04.235 ********* 2025-07-12 14:17:46.711538 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:17:46.711551 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:17:46.711563 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:17:46.711574 | orchestrator | 2025-07-12 14:17:46.711587 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2025-07-12 14:17:46.711599 | orchestrator | Saturday 12 July 2025 14:17:36 +0000 (0:00:00.306) 0:00:04.541 ********* 2025-07-12 14:17:46.711611 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:17:46.711623 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:17:46.711635 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:17:46.711647 | orchestrator | 2025-07-12 14:17:46.711659 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-07-12 14:17:46.711689 | orchestrator | Saturday 12 July 2025 14:17:36 +0000 (0:00:00.320) 0:00:04.861 ********* 2025-07-12 14:17:46.711702 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:17:46.711714 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:17:46.711726 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:17:46.711738 | orchestrator | 2025-07-12 14:17:46.711750 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-12 14:17:46.711763 | orchestrator | Saturday 12 July 2025 14:17:36 +0000 (0:00:00.301) 0:00:05.163 ********* 2025-07-12 14:17:46.711776 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:17:46.711788 | orchestrator | 2025-07-12 14:17:46.711800 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-12 14:17:46.711811 | orchestrator | Saturday 12 July 2025 14:17:37 +0000 (0:00:00.229) 0:00:05.392 ********* 2025-07-12 14:17:46.711831 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:17:46.711841 | orchestrator | 2025-07-12 14:17:46.711852 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-12 14:17:46.711863 | orchestrator | Saturday 12 July 2025 14:17:37 +0000 (0:00:00.671) 0:00:06.064 ********* 2025-07-12 14:17:46.711874 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:17:46.711885 | orchestrator | 2025-07-12 14:17:46.711895 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2025-07-12 14:17:46.711906 | orchestrator | Saturday 12 July 2025 14:17:37 +0000 (0:00:00.237) 0:00:06.301 ********* 2025-07-12 14:17:46.711917 | orchestrator | 2025-07-12 14:17:46.711927 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 14:17:46.711943 | orchestrator | Saturday 12 July 2025 14:17:38 +0000 (0:00:00.088) 0:00:06.390 ********* 2025-07-12 14:17:46.711954 | orchestrator | 2025-07-12 14:17:46.711965 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 14:17:46.711975 | orchestrator | Saturday 12 July 2025 14:17:38 +0000 (0:00:00.066) 0:00:06.456 ********* 2025-07-12 14:17:46.711986 | orchestrator | 2025-07-12 14:17:46.711997 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-12 14:17:46.712007 | orchestrator | Saturday 12 July 2025 14:17:38 +0000 (0:00:00.071) 0:00:06.527 ********* 2025-07-12 14:17:46.712018 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:17:46.712029 | orchestrator | 2025-07-12 14:17:46.712039 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-07-12 14:17:46.712050 | orchestrator | Saturday 12 July 2025 14:17:38 +0000 (0:00:00.256) 0:00:06.784 ********* 2025-07-12 14:17:46.712061 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:17:46.712072 | orchestrator | 2025-07-12 14:17:46.712100 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-07-12 14:17:46.712111 | orchestrator | Saturday 12 July 2025 14:17:38 +0000 (0:00:00.230) 0:00:07.015 ********* 2025-07-12 14:17:46.712122 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:17:46.712133 | orchestrator | 2025-07-12 14:17:46.712144 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2025-07-12 14:17:46.712155 | orchestrator | Saturday 12 July 2025 14:17:38 +0000 (0:00:00.119) 0:00:07.134 ********* 2025-07-12 14:17:46.712165 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:17:46.712176 | orchestrator | 2025-07-12 14:17:46.712187 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-07-12 14:17:46.712198 | orchestrator | Saturday 12 July 2025 14:17:40 +0000 (0:00:02.005) 0:00:09.140 ********* 2025-07-12 14:17:46.712208 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:17:46.712219 | orchestrator | 2025-07-12 14:17:46.712230 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-07-12 14:17:46.712240 | orchestrator | Saturday 12 July 2025 14:17:41 +0000 (0:00:00.238) 0:00:09.378 ********* 2025-07-12 14:17:46.712251 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:17:46.712262 | orchestrator | 2025-07-12 14:17:46.712273 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-07-12 14:17:46.712284 | orchestrator | Saturday 12 July 2025 14:17:41 +0000 (0:00:00.292) 0:00:09.671 ********* 2025-07-12 14:17:46.712294 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:17:46.712305 | orchestrator | 2025-07-12 14:17:46.712316 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-07-12 14:17:46.712327 | orchestrator | Saturday 12 July 2025 14:17:41 +0000 (0:00:00.332) 0:00:10.003 ********* 2025-07-12 14:17:46.712338 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:17:46.712348 | orchestrator | 2025-07-12 14:17:46.712359 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-12 14:17:46.712370 | orchestrator | Saturday 12 July 2025 14:17:41 +0000 (0:00:00.152) 0:00:10.156 ********* 2025-07-12 14:17:46.712381 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 
14:17:46.712392 | orchestrator | 2025-07-12 14:17:46.712403 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-12 14:17:46.712422 | orchestrator | Saturday 12 July 2025 14:17:42 +0000 (0:00:00.272) 0:00:10.429 ********* 2025-07-12 14:17:46.712433 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:17:46.712444 | orchestrator | 2025-07-12 14:17:46.712455 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-12 14:17:46.712465 | orchestrator | Saturday 12 July 2025 14:17:42 +0000 (0:00:00.234) 0:00:10.664 ********* 2025-07-12 14:17:46.712476 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 14:17:46.712487 | orchestrator | 2025-07-12 14:17:46.712498 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-12 14:17:46.712579 | orchestrator | Saturday 12 July 2025 14:17:43 +0000 (0:00:01.247) 0:00:11.911 ********* 2025-07-12 14:17:46.712591 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 14:17:46.712602 | orchestrator | 2025-07-12 14:17:46.712613 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-12 14:17:46.712623 | orchestrator | Saturday 12 July 2025 14:17:43 +0000 (0:00:00.244) 0:00:12.156 ********* 2025-07-12 14:17:46.712634 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 14:17:46.712645 | orchestrator | 2025-07-12 14:17:46.712656 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 14:17:46.712666 | orchestrator | Saturday 12 July 2025 14:17:44 +0000 (0:00:00.233) 0:00:12.389 ********* 2025-07-12 14:17:46.712677 | orchestrator | 2025-07-12 14:17:46.712688 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 14:17:46.712698 | orchestrator 
| Saturday 12 July 2025 14:17:44 +0000 (0:00:00.067) 0:00:12.457 ********* 2025-07-12 14:17:46.712709 | orchestrator | 2025-07-12 14:17:46.712720 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 14:17:46.712730 | orchestrator | Saturday 12 July 2025 14:17:44 +0000 (0:00:00.067) 0:00:12.524 ********* 2025-07-12 14:17:46.712741 | orchestrator | 2025-07-12 14:17:46.712752 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-12 14:17:46.712762 | orchestrator | Saturday 12 July 2025 14:17:44 +0000 (0:00:00.070) 0:00:12.594 ********* 2025-07-12 14:17:46.712773 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 14:17:46.712783 | orchestrator | 2025-07-12 14:17:46.712794 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-12 14:17:46.712805 | orchestrator | Saturday 12 July 2025 14:17:45 +0000 (0:00:01.552) 0:00:14.147 ********* 2025-07-12 14:17:46.712816 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-07-12 14:17:46.712827 | orchestrator |  "msg": [ 2025-07-12 14:17:46.712838 | orchestrator |  "Validator run completed.", 2025-07-12 14:17:46.712849 | orchestrator |  "You can find the report file here:", 2025-07-12 14:17:46.712865 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-07-12T14:17:32+00:00-report.json", 2025-07-12 14:17:46.712877 | orchestrator |  "on the following host:", 2025-07-12 14:17:46.712888 | orchestrator |  "testbed-manager" 2025-07-12 14:17:46.712898 | orchestrator |  ] 2025-07-12 14:17:46.712909 | orchestrator | } 2025-07-12 14:17:46.712920 | orchestrator | 2025-07-12 14:17:46.712931 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:17:46.712943 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2025-07-12 14:17:46.712955 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 14:17:46.712976 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 14:17:47.056352 | orchestrator | 2025-07-12 14:17:47.056471 | orchestrator | 2025-07-12 14:17:47.056489 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:17:47.056561 | orchestrator | Saturday 12 July 2025 14:17:46 +0000 (0:00:00.853) 0:00:15.000 ********* 2025-07-12 14:17:47.056575 | orchestrator | =============================================================================== 2025-07-12 14:17:47.056586 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.01s 2025-07-12 14:17:47.056597 | orchestrator | Write report file ------------------------------------------------------- 1.55s 2025-07-12 14:17:47.056608 | orchestrator | Aggregate test results step one ----------------------------------------- 1.25s 2025-07-12 14:17:47.056618 | orchestrator | Get container info ------------------------------------------------------ 0.99s 2025-07-12 14:17:47.056629 | orchestrator | Print report file information ------------------------------------------- 0.85s 2025-07-12 14:17:47.056640 | orchestrator | Create report output directory ------------------------------------------ 0.83s 2025-07-12 14:17:47.056651 | orchestrator | Aggregate test results step two ----------------------------------------- 0.67s 2025-07-12 14:17:47.056661 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s 2025-07-12 14:17:47.056672 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2025-07-12 14:17:47.056683 | orchestrator | Fail test if mgr modules are disabled that should be enabled ------------ 0.33s 2025-07-12 14:17:47.056693 | 
orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.32s 2025-07-12 14:17:47.056704 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-07-12 14:17:47.056715 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s 2025-07-12 14:17:47.056725 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.29s 2025-07-12 14:17:47.056736 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2025-07-12 14:17:47.056746 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2025-07-12 14:17:47.056757 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.27s 2025-07-12 14:17:47.056768 | orchestrator | Print report file information ------------------------------------------- 0.26s 2025-07-12 14:17:47.056778 | orchestrator | Define report vars ------------------------------------------------------ 0.25s 2025-07-12 14:17:47.056789 | orchestrator | Aggregate test results step two ----------------------------------------- 0.24s 2025-07-12 14:17:47.358863 | orchestrator | + osism validate ceph-osds 2025-07-12 14:18:07.862140 | orchestrator | 2025-07-12 14:18:07.862216 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-07-12 14:18:07.862224 | orchestrator | 2025-07-12 14:18:07.862229 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-12 14:18:07.862235 | orchestrator | Saturday 12 July 2025 14:18:03 +0000 (0:00:00.460) 0:00:00.460 ********* 2025-07-12 14:18:07.862240 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 14:18:07.862245 | orchestrator | 2025-07-12 14:18:07.862249 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2025-07-12 14:18:07.862253 | orchestrator | Saturday 12 July 2025 14:18:04 +0000 (0:00:00.636) 0:00:01.097 ********* 2025-07-12 14:18:07.862258 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 14:18:07.862262 | orchestrator | 2025-07-12 14:18:07.862266 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-12 14:18:07.862270 | orchestrator | Saturday 12 July 2025 14:18:04 +0000 (0:00:00.231) 0:00:01.328 ********* 2025-07-12 14:18:07.862274 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 14:18:07.862278 | orchestrator | 2025-07-12 14:18:07.862283 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-12 14:18:07.862287 | orchestrator | Saturday 12 July 2025 14:18:05 +0000 (0:00:00.997) 0:00:02.326 ********* 2025-07-12 14:18:07.862291 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:07.862296 | orchestrator | 2025-07-12 14:18:07.862301 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-07-12 14:18:07.862320 | orchestrator | Saturday 12 July 2025 14:18:05 +0000 (0:00:00.175) 0:00:02.502 ********* 2025-07-12 14:18:07.862325 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:18:07.862329 | orchestrator | 2025-07-12 14:18:07.862333 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-12 14:18:07.862337 | orchestrator | Saturday 12 July 2025 14:18:05 +0000 (0:00:00.139) 0:00:02.641 ********* 2025-07-12 14:18:07.862341 | orchestrator | skipping: [testbed-node-3] 2025-07-12 14:18:07.862345 | orchestrator | skipping: [testbed-node-4] 2025-07-12 14:18:07.862349 | orchestrator | skipping: [testbed-node-5] 2025-07-12 14:18:07.862353 | orchestrator | 2025-07-12 14:18:07.862358 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2025-07-12 14:18:07.862362 | orchestrator | Saturday 12 July 2025 14:18:06 +0000 (0:00:00.308) 0:00:02.950 ********* 2025-07-12 14:18:07.862366 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:07.862370 | orchestrator | 2025-07-12 14:18:07.862375 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-12 14:18:07.862379 | orchestrator | Saturday 12 July 2025 14:18:06 +0000 (0:00:00.143) 0:00:03.094 ********* 2025-07-12 14:18:07.862383 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:07.862388 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:18:07.862392 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:18:07.862396 | orchestrator | 2025-07-12 14:18:07.862400 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-07-12 14:18:07.862404 | orchestrator | Saturday 12 July 2025 14:18:06 +0000 (0:00:00.317) 0:00:03.411 ********* 2025-07-12 14:18:07.862408 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:07.862412 | orchestrator | 2025-07-12 14:18:07.862417 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-12 14:18:07.862421 | orchestrator | Saturday 12 July 2025 14:18:07 +0000 (0:00:00.510) 0:00:03.922 ********* 2025-07-12 14:18:07.862425 | orchestrator | ok: [testbed-node-3] 2025-07-12 14:18:07.862429 | orchestrator | ok: [testbed-node-4] 2025-07-12 14:18:07.862433 | orchestrator | ok: [testbed-node-5] 2025-07-12 14:18:07.862437 | orchestrator | 2025-07-12 14:18:07.862441 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-07-12 14:18:07.862445 | orchestrator | Saturday 12 July 2025 14:18:07 +0000 (0:00:00.492) 0:00:04.414 ********* 2025-07-12 14:18:07.862451 | orchestrator | skipping: [testbed-node-3] => (item={'id': '220d8c25eb9e1a51c309e0ac587fcc7195f9b9fd5eb04511323a629611ed0b61', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-12 14:18:07.862457 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4d0b522f7b52c5ce4dd76b0b91ae1ea98a6391d5ca0a0b1614417f225f5f2332', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-12 14:18:07.862475 | orchestrator | skipping: [testbed-node-3] => (item={'id': '55d7f01269df28e08b58cc2782b15b5a5335e14ef6f8bc4ec9bfbc960cdea775', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-07-12 14:18:07.862481 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c8ea9c03b51787f9b97a9b9b48249d525e7a9055b4ba5cc0b5bb63492f9e3cf2', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-12 14:18:07.862485 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1d45908bbfb8e0a92c9d2eb1f62f7abf4fbe8c302f3e06542b33aaa2a20f0dd1', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-07-12 14:18:07.862503 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6acb2c4545204747a2fd79ec95291cb82d50231c976086861758300f35bd5a1b', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-07-12 14:18:07.862514 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cc173fb5fc88170c65ccd15c853fb607d65c1e7aa1f7b08d3f96f858f849607e', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-07-12 14:18:07.862519 | orchestrator | skipping: 
[testbed-node-3] => (item={'id': '7350b5fbd421d5a3f61257e704cf4caf97ede607b22abfa2432539328a132823', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 14:18:07.862523 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e48fbeb6419565478e1744f194652660f765d44c5e64467152cc53d0d7df8a55', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-07-12 14:18:07.862527 | orchestrator | skipping: [testbed-node-3] => (item={'id': '78a8206ffd691310a618ef425e00bab36179734de3eca4d523d41970b36b11ea', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 14:18:07.862531 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7eed48f172853cb1b5765329465165a98ecba10a2f9c83ff3df58a729e74e84a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 14:18:07.862571 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5213596a2b2d4b6bbc76e096c034f9b5f0b151322878456a8d23da11fa3e238e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-12 14:18:07.862577 | orchestrator | ok: [testbed-node-3] => (item={'id': '63ae7fcab824f960e6cd4464a740a81660045258fb4c689ba6484286048c2396', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 14:18:07.862582 | orchestrator | ok: [testbed-node-3] => (item={'id': '718c01beb43cf96cc0e0cae42c4847fdef5103b16a640f0242620873a06cec11', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 
'status': 'Up 25 minutes'}) 2025-07-12 14:18:07.862586 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5fc288dbb317c2f301f9da2f016d0644b7d4d19848f2b90946bfe8407fe6785f', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-07-12 14:18:07.862590 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5688f271d8d4673a6d48c4ddf325ae1086f78e67aaa586981ca20fd3a0bb617d', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-12 14:18:07.862595 | orchestrator | skipping: [testbed-node-3] => (item={'id': '03427d920aecd99d8c3862d758a351b77c8526191c5eba178063108b67392289', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 14:18:07.862599 | orchestrator | skipping: [testbed-node-3] => (item={'id': '72c9fb755562c14527319ebe0f24bf480bdf30112783af950a2b1002685062d1', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 14:18:07.862604 | orchestrator | skipping: [testbed-node-3] => (item={'id': '666d6cfd507ab5ca05df8babed943d2908068ca59a6da4eeca65fe6135ebdea9', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 14:18:07.862608 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2e2e2eecc5f4a35b9460eaec509e681ebda5283a584dfe5f87455f67b19daa88', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-12 14:18:07.862616 | orchestrator | skipping: [testbed-node-3] => (item={'id': '25f02cd0b980e0fd1f937643dd06b5ced27ea551c681a92db6a331203f72c36e', 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-07-12 14:18:07.862623 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a2f7bf93b52147e3662ee57cfe190a49e6a7c34b27fa89f8f8de170e744e2a15', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-07-12 14:18:08.140937 | orchestrator | skipping: [testbed-node-4] => (item={'id': '60273192c3bfb26182b48b9bd21fab5c4b548b234c7871d2ce7859bfc1b37d5c', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-07-12 14:18:08.141044 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f03d767988c80f3a5e30f8a6369a9ce5bd1ee62be2b830d71cd6cc6ff949f1f4', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-07-12 14:18:08.141062 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'de181bf65e559242d20f83bd9b02256b4ff3c374d1088b52ea1eafba502f8b38', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-07-12 14:18:08.141073 | orchestrator | skipping: [testbed-node-4] => (item={'id': '11a84052e89a088d7a70ed0a861e4e77f8e6445775f650f037bafacf403e4a7a', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-07-12 14:18:08.141085 | orchestrator | skipping: [testbed-node-4] => (item={'id': '19c7fdec8fd30c1c369ee61eca526a1aa2a1f8a09cf070dad4a4b29612bb4b0c', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-07-12 14:18:08.141112 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd2247187da8f7b8b6fab94e0b37c7b342ed2ea9b947697a0303b6495ead3d6a', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-07-12 14:18:08.141124 | orchestrator | skipping: [testbed-node-4] => (item={'id': '96093230455bb42b3178bd4f5bfa4e9077b2c6207dde56eef4ec95ce0316eb08', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-07-12 14:18:08.141136 | orchestrator | skipping: [testbed-node-4] => (item={'id': '00593945334a5f2395ca4a6ed94ba76a3a0b711b4646a327307cc422d2bca285', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2025-07-12 14:18:08.141146 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f3d34f6fcce86c8ae6bf9a2e831a0c39eceef957a8833c92f8815feb8dcae328', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-07-12 14:18:08.141156 | orchestrator | skipping: [testbed-node-4] => (item={'id': '35315716641e509afb936e0b866246f58297935cc49479c503156787acb83ba6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-07-12 14:18:08.141167 | orchestrator | ok: [testbed-node-4] => (item={'id': '457d6099a0cff934b165a33b1a6ae64d4ae4c5a0ddc94ad21e78a2036e7719d4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'})
2025-07-12 14:18:08.141178 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c0f29ba1164b35f78e179ef251cc5e92f26365eb76e5f33d89b265bef4b28ee0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'})
2025-07-12 14:18:08.141229 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f7a6d67b6d4dc6f84e3e93dcd38881904341b7ddfa2144b8fc55e8d6741273b9', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})
2025-07-12 14:18:08.141241 | orchestrator | skipping: [testbed-node-4] => (item={'id': '883dea1da6b114c06bab1aae6fc558e952aa73d357db3378cdb2c6d349c4c20a', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-07-12 14:18:08.141252 | orchestrator | skipping: [testbed-node-4] => (item={'id': '35f78f5c408818bd25568d69b15dc003c9e6b625ad14e2e15413b2f6ce79dc32', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-07-12 14:18:08.141280 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eb08ab4acc7220b0a8f75e8180a3216e8d3f685949511663e98259438f4a3ead', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2025-07-12 14:18:08.141291 | orchestrator | skipping: [testbed-node-4] => (item={'id': '35d4392647caeacdb986fb2e7f44dbef0cd7559405d3dce30c649138fae38c8f', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2025-07-12 14:18:08.141302 | orchestrator | skipping: [testbed-node-4] => (item={'id': '892b64105a7159ce5925d793b2dd7e592ba9372606d2dcf02ce612c35122b5d1', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-07-12 14:18:08.141313 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0fe2fb16ad7bb0aa1a05bd442f6b4534569562182dfd4845a486d3667449522a', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-07-12 14:18:08.141323 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e7ee03d87a7465ede4dcda7300f3593add6370d3b1aa7ddeaa73c7227877cac7', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-07-12 14:18:08.141333 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9345f5e5aa4cf614b137ba9049ce997dea51700dd50e2bf733aac86f06df53fd', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-07-12 14:18:08.141345 | orchestrator | skipping: [testbed-node-5] => (item={'id': '622d07d48e6c69e0997fbf9422c561eb8ff3c3df26aa33d6ae6282a920aa9dbf', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-07-12 14:18:08.141355 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dc6538b6b30a96c60180401e3c1a9350878a82f7098a4341ceb9fdf5cf49c0c2', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-07-12 14:18:08.141366 | orchestrator | skipping: [testbed-node-5] => (item={'id': '87d8ed6fc05e70dc36fb4f6ff6c9360378b306ead2969ce746df55d0d82501b8', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-07-12 14:18:08.141376 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2483678ed716d470a3d53b288860342ea07663257f876f2253a366649aa75eda', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-07-12 14:18:08.141387 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b897d03d8ec8b5183c4980c907a689aee0e7c341949b3d279f1ce5d87a40ead9', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-07-12 14:18:08.141411 | orchestrator | skipping: [testbed-node-5] => (item={'id': '09e8a3124fe3b08b4e758d7dade208cf64665d49ffdbe171eaaf12cc20c38bae', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-07-12 14:18:08.141421 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ee0d79d5df3ba549b6c55d22839e6ea90fc83ced563bb91a7e6dc5e61dc2d57b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2025-07-12 14:18:08.141431 | orchestrator | skipping: [testbed-node-5] => (item={'id': '22f7ec7ec799c8a69ffa5648835f3773b6d205907f415192173314f20e225d57', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})
2025-07-12 14:18:08.141441 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b0f2db150261822455007facd6b52993b700ce7de54d0931e23735e7bb043db0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-07-12 14:18:08.141458 | orchestrator | ok: [testbed-node-5] => (item={'id': '914a149cf0d906c9b6341a2716d733f055c09c5c448e111c50c0afcc342e5f89', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'})
2025-07-12 14:18:16.253826 | orchestrator | ok: [testbed-node-5] => (item={'id': '073a82efd45b144f12929d5f8ab00895b0dff665662d74a21751f0e22659a00b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'})
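The loop results above show each host's container inventory being filtered so that only `ceph-osd-*` containers produce an `ok` result (everything else is `skipping`), which feeds the container-count check that follows. A minimal Python sketch of that filtering, assuming nothing beyond the item shape visible in the log (the `containers` sample and the `ceph_osd_containers` helper are illustrative, not taken from the osism playbooks):

```python
# Illustrative reproduction of the "Get list/count of ceph-osd containers on
# host" logic. Each dict mirrors the loop items printed in the log above.
containers = [
    {"name": "/fluentd", "state": "running"},
    {"name": "/ceph-osd-1", "state": "running"},
    {"name": "/ceph-osd-4", "state": "running"},
]

def ceph_osd_containers(items):
    """Keep only containers whose name marks them as a Ceph OSD."""
    return [c for c in items if c["name"].startswith("/ceph-osd-")]

osds = ceph_osd_containers(containers)
print(len(osds))  # → 2; the count compared against the expected OSDs per host
```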
2025-07-12 14:18:16.253929 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3087c17c33c222357f478120d510317502a81ccbcf745694d7f7eeef549eccb6', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})
2025-07-12 14:18:16.253946 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2f1ad903a9c42be3a7c2869dd47b9a83cb3578ed4d838588f1164556368f4902', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-07-12 14:18:16.253959 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ccff9493643f784c04ea63fcc16a877e6d71776682ac7433220da655cb134019', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-07-12 14:18:16.253971 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dc0f6108770b4889a012e6de964660cbbe156efffd8595e68e5774d8ee174b56', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2025-07-12 14:18:16.253996 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a3eabd6082185c0915dd2ec7865958d696c3dcc74a11d14f8af63b91cdb86bb1', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2025-07-12 14:18:16.254008 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ab1cc4efbf5733075404fa359a9bfea4ef5f1717c74afb198a0068c0b04d7047', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2025-07-12 14:18:16.254069 | orchestrator |
2025-07-12 14:18:16.254083 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-07-12 14:18:16.254095 | orchestrator | Saturday 12 July 2025 14:18:08 +0000 (0:00:00.526) 0:00:04.941 *********
2025-07-12 14:18:16.254106 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.254117 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:16.254128 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:16.254139 | orchestrator |
2025-07-12 14:18:16.254150 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-07-12 14:18:16.254185 | orchestrator | Saturday 12 July 2025 14:18:08 +0000 (0:00:00.315) 0:00:05.256 *********
2025-07-12 14:18:16.254197 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:16.254209 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:16.254219 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:16.254230 | orchestrator |
2025-07-12 14:18:16.254241 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-07-12 14:18:16.254251 | orchestrator | Saturday 12 July 2025 14:18:08 +0000 (0:00:00.291) 0:00:05.548 *********
2025-07-12 14:18:16.254262 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.254272 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:16.254283 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:16.254293 | orchestrator |
2025-07-12 14:18:16.254304 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 14:18:16.254314 | orchestrator | Saturday 12 July 2025 14:18:09 +0000 (0:00:00.543) 0:00:06.091 *********
2025-07-12 14:18:16.254325 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.254335 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:16.254346 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:16.254356 | orchestrator |
2025-07-12 14:18:16.254368 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-07-12 14:18:16.254381 | orchestrator | Saturday 12 July 2025 14:18:09 +0000 (0:00:00.311) 0:00:06.403 *********
2025-07-12 14:18:16.254392 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-07-12 14:18:16.254406 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-07-12 14:18:16.254418 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:16.254431 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-07-12 14:18:16.254443 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-07-12 14:18:16.254455 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:16.254467 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-07-12 14:18:16.254479 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-07-12 14:18:16.254491 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:16.254503 | orchestrator |
2025-07-12 14:18:16.254515 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-07-12 14:18:16.254527 | orchestrator | Saturday 12 July 2025 14:18:09 +0000 (0:00:00.337) 0:00:06.740 *********
2025-07-12 14:18:16.254539 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.254590 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:16.254604 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:16.254616 | orchestrator |
2025-07-12 14:18:16.254648 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-07-12 14:18:16.254661 | orchestrator | Saturday 12 July 2025 14:18:10 +0000 (0:00:00.298) 0:00:07.038 *********
2025-07-12 14:18:16.254672 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:16.254684 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:16.254697 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:16.254708 | orchestrator |
2025-07-12 14:18:16.254720 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-07-12 14:18:16.254730 | orchestrator | Saturday 12 July 2025 14:18:10 +0000 (0:00:00.526) 0:00:07.564 *********
2025-07-12 14:18:16.254741 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:16.254752 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:16.254762 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:16.254772 | orchestrator |
2025-07-12 14:18:16.254783 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-07-12 14:18:16.254794 | orchestrator | Saturday 12 July 2025 14:18:11 +0000 (0:00:00.311) 0:00:07.876 *********
2025-07-12 14:18:16.254804 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.254838 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:16.254849 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:16.254860 | orchestrator |
2025-07-12 14:18:16.254871 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 14:18:16.254882 | orchestrator | Saturday 12 July 2025 14:18:11 +0000 (0:00:00.331) 0:00:08.207 *********
2025-07-12 14:18:16.254893 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:16.254903 | orchestrator |
2025-07-12 14:18:16.254914 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 14:18:16.254924 | orchestrator | Saturday 12 July 2025 14:18:11 +0000 (0:00:00.241) 0:00:08.449 *********
2025-07-12 14:18:16.254935 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:16.254971 | orchestrator |
2025-07-12 14:18:16.254982 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 14:18:16.254993 | orchestrator | Saturday 12 July 2025 14:18:11 +0000 (0:00:00.243) 0:00:08.692 *********
2025-07-12 14:18:16.255003 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:16.255014 | orchestrator |
2025-07-12 14:18:16.255031 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:16.255042 | orchestrator | Saturday 12 July 2025 14:18:12 +0000 (0:00:00.238) 0:00:08.931 *********
2025-07-12 14:18:16.255052 | orchestrator |
2025-07-12 14:18:16.255063 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:16.255073 | orchestrator | Saturday 12 July 2025 14:18:12 +0000 (0:00:00.064) 0:00:08.995 *********
2025-07-12 14:18:16.255084 | orchestrator |
2025-07-12 14:18:16.255095 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:16.255105 | orchestrator | Saturday 12 July 2025 14:18:12 +0000 (0:00:00.061) 0:00:09.056 *********
2025-07-12 14:18:16.255116 | orchestrator |
2025-07-12 14:18:16.255126 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 14:18:16.255137 | orchestrator | Saturday 12 July 2025 14:18:12 +0000 (0:00:00.253) 0:00:09.310 *********
2025-07-12 14:18:16.255147 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:16.255158 | orchestrator |
2025-07-12 14:18:16.255169 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-07-12 14:18:16.255179 | orchestrator | Saturday 12 July 2025 14:18:12 +0000 (0:00:00.258) 0:00:09.568 *********
2025-07-12 14:18:16.255190 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:16.255200 | orchestrator |
2025-07-12 14:18:16.255222 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 14:18:16.255233 | orchestrator | Saturday 12 July 2025 14:18:13 +0000 (0:00:00.245) 0:00:09.814 *********
2025-07-12 14:18:16.255253 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.255264 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:16.255275 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:16.255285 | orchestrator |
2025-07-12 14:18:16.255296 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-07-12 14:18:16.255307 | orchestrator | Saturday 12 July 2025 14:18:13 +0000 (0:00:00.273) 0:00:10.087 *********
2025-07-12 14:18:16.255318 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.255328 | orchestrator |
2025-07-12 14:18:16.255339 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-07-12 14:18:16.255349 | orchestrator | Saturday 12 July 2025 14:18:13 +0000 (0:00:00.212) 0:00:10.299 *********
2025-07-12 14:18:16.255360 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 14:18:16.255370 | orchestrator |
2025-07-12 14:18:16.255381 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-07-12 14:18:16.255392 | orchestrator | Saturday 12 July 2025 14:18:15 +0000 (0:00:01.580) 0:00:11.879 *********
2025-07-12 14:18:16.255402 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.255413 | orchestrator |
2025-07-12 14:18:16.255423 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-07-12 14:18:16.255434 | orchestrator | Saturday 12 July 2025 14:18:15 +0000 (0:00:00.326) 0:00:11.988 *********
2025-07-12 14:18:16.255452 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.255463 | orchestrator |
2025-07-12 14:18:16.255474 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-07-12 14:18:16.255484 | orchestrator | Saturday 12 July 2025 14:18:15 +0000 (0:00:00.106) 0:00:12.315 *********
2025-07-12 14:18:16.255495 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:16.255505 | orchestrator |
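The `Get ceph osd tree`, `Parse osd tree from JSON`, and `Get OSDs that are not up or in` tasks above amount to parsing the JSON form of `ceph osd tree` and flagging any OSD that is down or out. A hedged sketch, assuming Ceph's usual `nodes`/`type`/`status`/`reweight` fields (the miniature tree below is invented for illustration):

```python
import json

# Hypothetical miniature of `ceph osd tree -f json` output; an OSD is "up"
# per its status field and "in" when its reweight is non-zero.
osd_tree_json = '''
{"nodes": [
  {"id": -1, "type": "root", "name": "default"},
  {"id": 0, "type": "osd", "name": "osd.0", "status": "up", "reweight": 1.0},
  {"id": 3, "type": "osd", "name": "osd.3", "status": "down", "reweight": 0.0}
]}
'''

tree = json.loads(osd_tree_json)
# Collect the OSDs a validator like this would fail on.
bad = [n["name"] for n in tree["nodes"]
       if n["type"] == "osd" and (n["status"] != "up" or n["reweight"] == 0)]
print(bad)  # → ['osd.3']
```

In the log above the equivalent list comes back empty, so the fail task is skipped and the pass task runs.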
2025-07-12 14:18:16.255516 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-07-12 14:18:16.255527 | orchestrator | Saturday 12 July 2025 14:18:15 +0000 (0:00:00.106) 0:00:12.421 *********
2025-07-12 14:18:16.255537 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.255548 | orchestrator |
2025-07-12 14:18:16.255594 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 14:18:16.255605 | orchestrator | Saturday 12 July 2025 14:18:15 +0000 (0:00:00.113) 0:00:12.535 *********
2025-07-12 14:18:16.255616 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:16.255626 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:16.255637 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:16.255647 | orchestrator |
2025-07-12 14:18:16.255658 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-07-12 14:18:16.255677 | orchestrator | Saturday 12 July 2025 14:18:16 +0000 (0:00:00.519) 0:00:13.054 *********
2025-07-12 14:18:28.642111 | orchestrator | changed: [testbed-node-3]
2025-07-12 14:18:28.642230 | orchestrator | changed: [testbed-node-5]
2025-07-12 14:18:28.642246 | orchestrator | changed: [testbed-node-4]
2025-07-12 14:18:28.642258 | orchestrator |
2025-07-12 14:18:28.642271 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-07-12 14:18:28.642283 | orchestrator | Saturday 12 July 2025 14:18:18 +0000 (0:00:02.323) 0:00:15.377 *********
2025-07-12 14:18:28.642294 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:28.642306 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:28.642317 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:28.642327 | orchestrator |
2025-07-12 14:18:28.642338 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-07-12 14:18:28.642349 | orchestrator | Saturday 12 July 2025 14:18:18 +0000 (0:00:00.290) 0:00:15.668 *********
2025-07-12 14:18:28.642360 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:28.642371 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:28.642381 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:28.642392 | orchestrator |
2025-07-12 14:18:28.642403 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-07-12 14:18:28.642414 | orchestrator | Saturday 12 July 2025 14:18:19 +0000 (0:00:00.475) 0:00:16.143 *********
2025-07-12 14:18:28.642425 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:28.642436 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:28.642446 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:28.642457 | orchestrator |
2025-07-12 14:18:28.642468 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-07-12 14:18:28.642479 | orchestrator | Saturday 12 July 2025 14:18:19 +0000 (0:00:00.528) 0:00:16.671 *********
2025-07-12 14:18:28.642489 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:28.642500 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:28.642510 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:28.642521 | orchestrator |
2025-07-12 14:18:28.642532 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-07-12 14:18:28.642542 | orchestrator | Saturday 12 July 2025 14:18:20 +0000 (0:00:00.356) 0:00:17.028 *********
2025-07-12 14:18:28.642553 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:28.642564 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:28.642618 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:28.642630 | orchestrator |
2025-07-12 14:18:28.642642 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-07-12 14:18:28.642654 | orchestrator | Saturday 12 July 2025 14:18:20 +0000 (0:00:00.293) 0:00:17.321 *********
2025-07-12 14:18:28.642667 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:28.642708 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:28.642721 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:28.642733 | orchestrator |
2025-07-12 14:18:28.642745 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 14:18:28.642758 | orchestrator | Saturday 12 July 2025 14:18:20 +0000 (0:00:00.287) 0:00:17.609 *********
2025-07-12 14:18:28.642770 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:28.642782 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:28.642794 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:28.642806 | orchestrator |
2025-07-12 14:18:28.642818 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-07-12 14:18:28.642830 | orchestrator | Saturday 12 July 2025 14:18:21 +0000 (0:00:00.711) 0:00:18.321 *********
2025-07-12 14:18:28.642843 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:28.642855 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:28.642867 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:28.642880 | orchestrator |
2025-07-12 14:18:28.642892 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-07-12 14:18:28.642904 | orchestrator | Saturday 12 July 2025 14:18:21 +0000 (0:00:00.481) 0:00:18.802 *********
2025-07-12 14:18:28.642916 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:28.642928 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:28.642941 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:28.642952 | orchestrator |
2025-07-12 14:18:28.642965 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-07-12 14:18:28.642977 | orchestrator | Saturday 12 July 2025 14:18:22 +0000 (0:00:00.322) 0:00:19.124 *********
2025-07-12 14:18:28.642989 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:28.643001 | orchestrator | skipping: [testbed-node-4]
2025-07-12 14:18:28.643012 | orchestrator | skipping: [testbed-node-5]
2025-07-12 14:18:28.643023 | orchestrator |
2025-07-12 14:18:28.643033 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-07-12 14:18:28.643044 | orchestrator | Saturday 12 July 2025 14:18:22 +0000 (0:00:00.284) 0:00:19.409 *********
2025-07-12 14:18:28.643054 | orchestrator | ok: [testbed-node-3]
2025-07-12 14:18:28.643065 | orchestrator | ok: [testbed-node-4]
2025-07-12 14:18:28.643075 | orchestrator | ok: [testbed-node-5]
2025-07-12 14:18:28.643086 | orchestrator |
2025-07-12 14:18:28.643096 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 14:18:28.643107 | orchestrator | Saturday 12 July 2025 14:18:23 +0000 (0:00:00.529) 0:00:19.939 *********
2025-07-12 14:18:28.643117 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 14:18:28.643128 | orchestrator |
2025-07-12 14:18:28.643139 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-12 14:18:28.643149 | orchestrator | Saturday 12 July 2025 14:18:23 +0000 (0:00:00.232) 0:00:20.172 *********
2025-07-12 14:18:28.643160 | orchestrator | skipping: [testbed-node-3]
2025-07-12 14:18:28.643170 | orchestrator |
2025-07-12 14:18:28.643181 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 14:18:28.643191 | orchestrator | Saturday 12 July 2025 14:18:23 +0000 (0:00:00.242) 0:00:20.415 *********
2025-07-12 14:18:28.643202 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 14:18:28.643212 | orchestrator |
2025-07-12 14:18:28.643223 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 14:18:28.643234 |
orchestrator | Saturday 12 July 2025 14:18:25 +0000 (0:00:01.625) 0:00:22.040 *********
2025-07-12 14:18:28.643244 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 14:18:28.643255 | orchestrator |
2025-07-12 14:18:28.643265 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 14:18:28.643276 | orchestrator | Saturday 12 July 2025 14:18:25 +0000 (0:00:00.258) 0:00:22.298 *********
2025-07-12 14:18:28.643305 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 14:18:28.643316 | orchestrator |
2025-07-12 14:18:28.643327 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:28.643346 | orchestrator | Saturday 12 July 2025 14:18:25 +0000 (0:00:00.066) 0:00:22.558 *********
2025-07-12 14:18:28.643356 | orchestrator |
2025-07-12 14:18:28.643413 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:28.643425 | orchestrator | Saturday 12 July 2025 14:18:25 +0000 (0:00:00.066) 0:00:22.625 *********
2025-07-12 14:18:28.643436 | orchestrator |
2025-07-12 14:18:28.643446 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 14:18:28.643457 | orchestrator | Saturday 12 July 2025 14:18:25 +0000 (0:00:00.066) 0:00:22.691 *********
2025-07-12 14:18:28.643467 | orchestrator |
2025-07-12 14:18:28.643478 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 14:18:28.643489 | orchestrator | Saturday 12 July 2025 14:18:25 +0000 (0:00:00.082) 0:00:22.773 *********
2025-07-12 14:18:28.643499 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 14:18:28.643510 | orchestrator |
2025-07-12 14:18:28.643520 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 14:18:28.643531 | orchestrator | Saturday 12 July 2025 14:18:27 +0000 (0:00:01.509) 0:00:24.283 *********
2025-07-12 14:18:28.643542 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-07-12 14:18:28.643552 | orchestrator |  "msg": [
2025-07-12 14:18:28.643563 | orchestrator |  "Validator run completed.",
2025-07-12 14:18:28.643601 | orchestrator |  "You can find the report file here:",
2025-07-12 14:18:28.643613 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-07-12T14:18:04+00:00-report.json",
2025-07-12 14:18:28.643625 | orchestrator |  "on the following host:",
2025-07-12 14:18:28.643636 | orchestrator |  "testbed-manager"
2025-07-12 14:18:28.643646 | orchestrator |  ]
2025-07-12 14:18:28.643657 | orchestrator | }
2025-07-12 14:18:28.643668 | orchestrator |
2025-07-12 14:18:28.643684 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:18:28.643696 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-07-12 14:18:28.643708 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 14:18:28.643719 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 14:18:28.643730 | orchestrator |
2025-07-12 14:18:28.643740 | orchestrator |
2025-07-12 14:18:28.643754 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:18:28.643772 | orchestrator | Saturday 12 July 2025 14:18:28 +0000 (0:00:00.827) 0:00:25.110 *********
2025-07-12 14:18:28.643790 | orchestrator | ===============================================================================
2025-07-12 14:18:28.643808 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.32s
2025-07-12 14:18:28.643825 | orchestrator | Aggregate test results step one ----------------------------------------- 1.63s
2025-07-12 14:18:28.643843 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.58s
2025-07-12 14:18:28.643860 | orchestrator | Write report file ------------------------------------------------------- 1.51s
2025-07-12 14:18:28.643877 | orchestrator | Create report output directory ------------------------------------------ 1.00s
2025-07-12 14:18:28.643896 | orchestrator | Print report file information ------------------------------------------- 0.83s
2025-07-12 14:18:28.643907 | orchestrator | Prepare test data ------------------------------------------------------- 0.71s
2025-07-12 14:18:28.643917 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s
2025-07-12 14:18:28.643928 | orchestrator | Set test result to passed if count matches ------------------------------ 0.54s
2025-07-12 14:18:28.643939 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.53s
2025-07-12 14:18:28.643959 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.53s
2025-07-12 14:18:28.643970 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.53s
2025-07-12 14:18:28.643980 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.53s
2025-07-12 14:18:28.643991 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s
2025-07-12 14:18:28.644001 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.51s
2025-07-12 14:18:28.644012 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s
2025-07-12 14:18:28.644023 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.48s
2025-07-12 14:18:28.644034 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s
2025-07-12 14:18:28.644044 | orchestrator | Flush handlers ---------------------------------------------------------- 0.38s
2025-07-12 14:18:28.644055 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.36s
2025-07-12 14:18:28.943872 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-07-12 14:18:28.950991 | orchestrator | + set -e
2025-07-12 14:18:28.951066 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 14:18:28.951089 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 14:18:28.951110 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 14:18:28.951129 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 14:18:28.951147 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 14:18:28.951168 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 14:18:28.951187 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 14:18:28.951205 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-12 14:18:28.951224 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-12 14:18:28.951242 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 14:18:28.951260 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 14:18:28.951278 | orchestrator | ++ export ARA=false
2025-07-12 14:18:28.951297 | orchestrator | ++ ARA=false
2025-07-12 14:18:28.951314 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 14:18:28.951331 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 14:18:28.951348 | orchestrator | ++ export TEMPEST=false
2025-07-12 14:18:28.951366 | orchestrator | ++ TEMPEST=false
2025-07-12 14:18:28.951386 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 14:18:28.951405 | orchestrator | ++ IS_ZUUL=true
2025-07-12 14:18:28.951423 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5
2025-07-12 14:18:28.951442 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.5
2025-07-12 14:18:28.951461 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 14:18:28.951480 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 14:18:28.951499 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 14:18:28.951516 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 14:18:28.951534 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 14:18:28.951553 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 14:18:28.951613 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 14:18:28.951632 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 14:18:28.951650 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-12 14:18:28.951668 | orchestrator | + source /etc/os-release
2025-07-12 14:18:28.951684 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-07-12 14:18:28.951694 | orchestrator | ++ NAME=Ubuntu
2025-07-12 14:18:28.951704 | orchestrator | ++ VERSION_ID=24.04
2025-07-12 14:18:28.951715 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-07-12 14:18:28.951725 | orchestrator | ++ VERSION_CODENAME=noble
2025-07-12 14:18:28.951735 | orchestrator | ++ ID=ubuntu
2025-07-12 14:18:28.951746 | orchestrator | ++ ID_LIKE=debian
2025-07-12 14:18:28.951758 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-07-12 14:18:28.951777 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-07-12 14:18:28.951795 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-07-12 14:18:28.951814 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-07-12 14:18:28.951834 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-07-12 14:18:28.951853 | orchestrator | ++ LOGO=ubuntu-logo
2025-07-12 14:18:28.951871 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-07-12 14:18:28.951891 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-07-12 14:18:28.951912 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl
monitoring-plugins-basic mysql-client 2025-07-12 14:18:28.976634 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-07-12 14:18:50.167487 | orchestrator | 2025-07-12 14:18:50.167659 | orchestrator | # Status of Elasticsearch 2025-07-12 14:18:50.167678 | orchestrator | 2025-07-12 14:18:50.167690 | orchestrator | + pushd /opt/configuration/contrib 2025-07-12 14:18:50.167703 | orchestrator | + echo 2025-07-12 14:18:50.167714 | orchestrator | + echo '# Status of Elasticsearch' 2025-07-12 14:18:50.167725 | orchestrator | + echo 2025-07-12 14:18:50.167736 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-07-12 14:18:50.349920 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-07-12 14:18:50.350070 | orchestrator | 2025-07-12 14:18:50.350088 | orchestrator | # Status of MariaDB 2025-07-12 14:18:50.350101 | orchestrator | 2025-07-12 14:18:50.350112 | orchestrator | + echo 2025-07-12 14:18:50.350124 | orchestrator | + echo '# Status of MariaDB' 2025-07-12 14:18:50.350135 | orchestrator | + echo 2025-07-12 14:18:50.350145 | orchestrator | + MARIADB_USER=root_shard_0 2025-07-12 14:18:50.350157 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-07-12 14:18:50.406282 | orchestrator | Reading package lists... 2025-07-12 14:18:50.745984 | orchestrator | Building dependency tree... 2025-07-12 14:18:50.746449 | orchestrator | Reading state information... 2025-07-12 14:18:51.170308 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 
2025-07-12 14:18:51.170416 | orchestrator | bc set to manually installed. 2025-07-12 14:18:51.170433 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-07-12 14:18:51.834489 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-07-12 14:18:51.834954 | orchestrator | 2025-07-12 14:18:51.834989 | orchestrator | # Status of Prometheus 2025-07-12 14:18:51.835001 | orchestrator | + echo 2025-07-12 14:18:51.835013 | orchestrator | + echo '# Status of Prometheus' 2025-07-12 14:18:51.835024 | orchestrator | + echo 2025-07-12 14:18:51.835034 | orchestrator | 2025-07-12 14:18:51.835046 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-07-12 14:18:51.902148 | orchestrator | Unauthorized 2025-07-12 14:18:51.910167 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-07-12 14:18:51.985021 | orchestrator | Unauthorized 2025-07-12 14:18:51.989094 | orchestrator | 2025-07-12 14:18:51.989131 | orchestrator | # Status of RabbitMQ 2025-07-12 14:18:51.989144 | orchestrator | 2025-07-12 14:18:51.989155 | orchestrator | + echo 2025-07-12 14:18:51.989166 | orchestrator | + echo '# Status of RabbitMQ' 2025-07-12 14:18:51.989177 | orchestrator | + echo 2025-07-12 14:18:51.989189 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-07-12 14:18:52.492364 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-07-12 14:18:52.500192 | orchestrator | 2025-07-12 14:18:52.500247 | orchestrator | # Status of Redis 2025-07-12 14:18:52.500262 | orchestrator | 2025-07-12 14:18:52.500273 | orchestrator | + echo 2025-07-12 14:18:52.500285 | orchestrator | + echo '# Status of Redis' 2025-07-12 14:18:52.500302 | orchestrator | + echo 2025-07-12 14:18:52.500323 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH 
QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-07-12 14:18:52.505927 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002389s;;;0.000000;10.000000 2025-07-12 14:18:52.505976 | orchestrator | + popd 2025-07-12 14:18:52.506236 | orchestrator | 2025-07-12 14:18:52.506261 | orchestrator | # Create backup of MariaDB database 2025-07-12 14:18:52.506274 | orchestrator | 2025-07-12 14:18:52.506285 | orchestrator | + echo 2025-07-12 14:18:52.506296 | orchestrator | + echo '# Create backup of MariaDB database' 2025-07-12 14:18:52.506307 | orchestrator | + echo 2025-07-12 14:18:52.506318 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-07-12 14:18:54.403907 | orchestrator | 2025-07-12 14:18:54 | INFO  | Task 0fdb40e6-731a-4ae0-a55c-8570e6b39f91 (mariadb_backup) was prepared for execution. 2025-07-12 14:18:54.404041 | orchestrator | 2025-07-12 14:18:54 | INFO  | It takes a moment until task 0fdb40e6-731a-4ae0-a55c-8570e6b39f91 (mariadb_backup) has been started and output is visible here. 
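The service checks above (`check_elasticsearch`, `check_galera_cluster`, `check_rabbitmq_cluster`, `check_tcp`) can safely run under `set -e` because Monitoring/Nagios plugins follow a fixed exit-code convention: any non-OK state exits non-zero and aborts the check script. A minimal sketch of that convention (the plugin names are from this log; the code below is illustrative, not part of the testbed scripts):

```python
# Standard Monitoring Plugins exit-code convention used by the checks above:
# 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
NAGIOS_STATES = {0: "OK", 1: "WARNING", 2: "CRITICAL", 3: "UNKNOWN"}

def interpret(exit_code: int) -> str:
    """Map a plugin exit code to its Nagios state name.

    Codes outside 0-3 are treated as UNKNOWN, matching common
    monitoring-framework behaviour.
    """
    return NAGIOS_STATES.get(exit_code, "UNKNOWN")

# Under `set -e`, only an OK (0) exit lets the script continue.
assert interpret(0) == "OK"
assert interpret(2) == "CRITICAL"
```

This is why a single failing health check (e.g. a Galera node count below the `-c 1` threshold) fails the whole `200-infrastructure.sh` stage rather than being silently skipped.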
2025-07-12 14:20:05.334543 | orchestrator | 2025-07-12 14:20:05.334651 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 14:20:05.334667 | orchestrator | 2025-07-12 14:20:05.334727 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 14:20:05.334740 | orchestrator | Saturday 12 July 2025 14:18:58 +0000 (0:00:00.178) 0:00:00.178 ********* 2025-07-12 14:20:05.334752 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:20:05.334764 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:20:05.334775 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:20:05.334786 | orchestrator | 2025-07-12 14:20:05.334797 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 14:20:05.334808 | orchestrator | Saturday 12 July 2025 14:18:58 +0000 (0:00:00.309) 0:00:00.488 ********* 2025-07-12 14:20:05.334819 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-12 14:20:05.334831 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-07-12 14:20:05.334842 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-12 14:20:05.334854 | orchestrator | 2025-07-12 14:20:05.334865 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-12 14:20:05.334876 | orchestrator | 2025-07-12 14:20:05.334887 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-12 14:20:05.334898 | orchestrator | Saturday 12 July 2025 14:18:59 +0000 (0:00:00.595) 0:00:01.083 ********* 2025-07-12 14:20:05.334908 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 14:20:05.334920 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-12 14:20:05.334931 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-12 14:20:05.334942 | orchestrator | 
2025-07-12 14:20:05.334953 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 14:20:05.334964 | orchestrator | Saturday 12 July 2025 14:18:59 +0000 (0:00:00.380) 0:00:01.464 ********* 2025-07-12 14:20:05.334976 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 14:20:05.334987 | orchestrator | 2025-07-12 14:20:05.334998 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-07-12 14:20:05.335009 | orchestrator | Saturday 12 July 2025 14:19:00 +0000 (0:00:00.510) 0:00:01.975 ********* 2025-07-12 14:20:05.335020 | orchestrator | ok: [testbed-node-0] 2025-07-12 14:20:05.335031 | orchestrator | ok: [testbed-node-1] 2025-07-12 14:20:05.335042 | orchestrator | ok: [testbed-node-2] 2025-07-12 14:20:05.335053 | orchestrator | 2025-07-12 14:20:05.335064 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-07-12 14:20:05.335091 | orchestrator | Saturday 12 July 2025 14:19:03 +0000 (0:00:03.010) 0:00:04.985 ********* 2025-07-12 14:20:05.335104 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-12 14:20:05.335117 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-07-12 14:20:05.335130 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 14:20:05.335143 | orchestrator | mariadb_bootstrap_restart 2025-07-12 14:20:05.335156 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:20:05.335169 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:20:05.335181 | orchestrator | changed: [testbed-node-0] 2025-07-12 14:20:05.335194 | orchestrator | 2025-07-12 14:20:05.335206 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-12 14:20:05.335219 | orchestrator | 
skipping: no hosts matched 2025-07-12 14:20:05.335231 | orchestrator | 2025-07-12 14:20:05.335244 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 14:20:05.335256 | orchestrator | skipping: no hosts matched 2025-07-12 14:20:05.335268 | orchestrator | 2025-07-12 14:20:05.335282 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-12 14:20:05.335316 | orchestrator | skipping: no hosts matched 2025-07-12 14:20:05.335329 | orchestrator | 2025-07-12 14:20:05.335341 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-12 14:20:05.335353 | orchestrator | 2025-07-12 14:20:05.335365 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-12 14:20:05.335378 | orchestrator | Saturday 12 July 2025 14:20:04 +0000 (0:01:01.089) 0:01:06.075 ********* 2025-07-12 14:20:05.335391 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:20:05.335403 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:20:05.335416 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:20:05.335428 | orchestrator | 2025-07-12 14:20:05.335441 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-12 14:20:05.335453 | orchestrator | Saturday 12 July 2025 14:20:04 +0000 (0:00:00.329) 0:01:06.404 ********* 2025-07-12 14:20:05.335465 | orchestrator | skipping: [testbed-node-0] 2025-07-12 14:20:05.335475 | orchestrator | skipping: [testbed-node-1] 2025-07-12 14:20:05.335486 | orchestrator | skipping: [testbed-node-2] 2025-07-12 14:20:05.335497 | orchestrator | 2025-07-12 14:20:05.335507 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 14:20:05.335520 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 
14:20:05.335532 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 14:20:05.335543 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 14:20:05.335553 | orchestrator | 2025-07-12 14:20:05.335564 | orchestrator | 2025-07-12 14:20:05.335575 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 14:20:05.335587 | orchestrator | Saturday 12 July 2025 14:20:04 +0000 (0:00:00.260) 0:01:06.665 ********* 2025-07-12 14:20:05.335597 | orchestrator | =============================================================================== 2025-07-12 14:20:05.335608 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 61.09s 2025-07-12 14:20:05.335637 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.01s 2025-07-12 14:20:05.335648 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-07-12 14:20:05.335659 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.51s 2025-07-12 14:20:05.335670 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.38s 2025-07-12 14:20:05.335706 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.33s 2025-07-12 14:20:05.335718 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-07-12 14:20:05.335728 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.26s 2025-07-12 14:20:05.759349 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-07-12 14:20:05.767496 | orchestrator | + set -e 2025-07-12 14:20:05.767544 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 14:20:05.767558 | orchestrator | ++ export 
INTERACTIVE=false 2025-07-12 14:20:05.767571 | orchestrator | ++ INTERACTIVE=false 2025-07-12 14:20:05.767581 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 14:20:05.767672 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 14:20:05.767740 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-12 14:20:05.769010 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-12 14:20:05.775580 | orchestrator | 2025-07-12 14:20:05.775628 | orchestrator | # OpenStack endpoints 2025-07-12 14:20:05.775639 | orchestrator | 2025-07-12 14:20:05.775651 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-12 14:20:05.775662 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-12 14:20:05.775697 | orchestrator | + export OS_CLOUD=admin 2025-07-12 14:20:05.775711 | orchestrator | + OS_CLOUD=admin 2025-07-12 14:20:05.775753 | orchestrator | + echo 2025-07-12 14:20:05.775765 | orchestrator | + echo '# OpenStack endpoints' 2025-07-12 14:20:05.775776 | orchestrator | + echo 2025-07-12 14:20:05.775787 | orchestrator | + openstack endpoint list 2025-07-12 14:20:09.224430 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-12 14:20:09.224562 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-07-12 14:20:09.224586 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-12 14:20:09.224602 | orchestrator | | 260bf4dc4ef9439ba97b286bae512afb | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-07-12 14:20:09.224618 | orchestrator | | 2b163a19e0e84201b941d00f3619ea51 | RegionOne | cinderv3 | volumev3 | True 
| public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-12 14:20:09.224634 | orchestrator | | 3270ee89e3904748aeddcf222b5f7a15 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-07-12 14:20:09.224650 | orchestrator | | 3f03f930433e4b0a9ed92c57f96bea9e | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-07-12 14:20:09.224666 | orchestrator | | 48a68343b7334e3ea8fd2608b7c2a08d | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-07-12 14:20:09.224769 | orchestrator | | 5600e41d1c9a43e4aa0b25d6c80e4931 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-07-12 14:20:09.224791 | orchestrator | | 674ee035542049ccb7e99fed1114bc29 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-07-12 14:20:09.224807 | orchestrator | | 67e5612e0ec141e5935e320b6f9fb2ad | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-07-12 14:20:09.224823 | orchestrator | | 6b496110b4bc4e33b564ed6d906e950b | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-12 14:20:09.224838 | orchestrator | | 6f9eea8390d1434c8436ba83fc84efa3 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-07-12 14:20:09.224855 | orchestrator | | 76876e8df5a64cf78f191879c0a561ca | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-07-12 14:20:09.224872 | orchestrator | | 816559a43cce4721baa622b355524724 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-07-12 14:20:09.224914 | orchestrator | | 834ec82ba67047a2b272f6eaa704b762 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-07-12 14:20:09.224932 | 
orchestrator | | 842605a411a34a07bae6f3aee654d438 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-12 14:20:09.224949 | orchestrator | | 95d588bd40904deb90744897febd409a | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-07-12 14:20:09.224968 | orchestrator | | a2d007246c2d4eca8db67de64c64a381 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-07-12 14:20:09.224986 | orchestrator | | ac96004d00744c0bad10016ed1e626a8 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-07-12 14:20:09.225032 | orchestrator | | b36bd4e591ab4c48a31f03758ac9c740 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-12 14:20:09.225048 | orchestrator | | c78653b493b64e968127b2e5941cf2ad | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-07-12 14:20:09.225064 | orchestrator | | d4a005e015544c4abe6c2406db37e0c9 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-07-12 14:20:09.225105 | orchestrator | | f6a2fc3035ee449f80cc0b05e0b02891 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-07-12 14:20:09.225122 | orchestrator | | f7b2d65f74ea40bbb8a518ce6a331747 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-07-12 14:20:09.225138 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-12 14:20:09.483246 | orchestrator | 2025-07-12 14:20:09.483364 | orchestrator | # Cinder 2025-07-12 14:20:09.483380 | orchestrator | 2025-07-12 14:20:09.483392 | orchestrator | + echo 2025-07-12 
14:20:09.483404 | orchestrator | + echo '# Cinder' 2025-07-12 14:20:09.483415 | orchestrator | + echo 2025-07-12 14:20:09.483428 | orchestrator | + openstack volume service list 2025-07-12 14:20:12.180227 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 14:20:12.180354 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-07-12 14:20:12.180371 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 14:20:12.180383 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-12T14:20:03.000000 | 2025-07-12 14:20:12.180394 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-12T14:20:03.000000 | 2025-07-12 14:20:12.180405 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-12T14:20:04.000000 | 2025-07-12 14:20:12.180416 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-07-12T14:20:03.000000 | 2025-07-12 14:20:12.180427 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-07-12T14:20:03.000000 | 2025-07-12 14:20:12.180437 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-07-12T14:20:06.000000 | 2025-07-12 14:20:12.180448 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-07-12T14:20:03.000000 | 2025-07-12 14:20:12.180459 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-07-12T14:20:03.000000 | 2025-07-12 14:20:12.180469 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-07-12T14:20:04.000000 | 2025-07-12 14:20:12.180480 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 14:20:12.441924 | 
orchestrator | 2025-07-12 14:20:12.442076 | orchestrator | # Neutron 2025-07-12 14:20:12.442095 | orchestrator | 2025-07-12 14:20:12.442108 | orchestrator | + echo 2025-07-12 14:20:12.442119 | orchestrator | + echo '# Neutron' 2025-07-12 14:20:12.442131 | orchestrator | + echo 2025-07-12 14:20:12.442142 | orchestrator | + openstack network agent list 2025-07-12 14:20:15.591921 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 14:20:15.592010 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-07-12 14:20:15.592042 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 14:20:15.592049 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-07-12 14:20:15.592056 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-07-12 14:20:15.592062 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-07-12 14:20:15.592068 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-07-12 14:20:15.592074 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-07-12 14:20:15.592080 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-07-12 14:20:15.592086 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 14:20:15.592092 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | 
UP | neutron-ovn-metadata-agent | 2025-07-12 14:20:15.592098 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 14:20:15.592105 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 14:20:15.874822 | orchestrator | + openstack network service provider list 2025-07-12 14:20:18.600750 | orchestrator | +---------------+------+---------+ 2025-07-12 14:20:18.600857 | orchestrator | | Service Type | Name | Default | 2025-07-12 14:20:18.600871 | orchestrator | +---------------+------+---------+ 2025-07-12 14:20:18.600882 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-07-12 14:20:18.600894 | orchestrator | +---------------+------+---------+ 2025-07-12 14:20:18.887497 | orchestrator | 2025-07-12 14:20:18.887595 | orchestrator | # Nova 2025-07-12 14:20:18.887610 | orchestrator | 2025-07-12 14:20:18.887622 | orchestrator | + echo 2025-07-12 14:20:18.887633 | orchestrator | + echo '# Nova' 2025-07-12 14:20:18.887644 | orchestrator | + echo 2025-07-12 14:20:18.887655 | orchestrator | + openstack compute service list 2025-07-12 14:20:21.560684 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 14:20:21.560862 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-07-12 14:20:21.560878 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 14:20:21.560911 | orchestrator | | 2d249aa0-191d-4a84-990f-c57fe1e0c986 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-12T14:20:21.000000 | 2025-07-12 14:20:21.560924 | orchestrator | | ffc2d911-4cd1-4d09-ad75-df0c24aee22c | nova-scheduler | testbed-node-2 
| internal | enabled | up | 2025-07-12T14:20:15.000000 | 2025-07-12 14:20:21.560977 | orchestrator | | 2b10c1b6-36bc-4404-b998-d1d393eea07b | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-12T14:20:16.000000 | 2025-07-12 14:20:21.560990 | orchestrator | | 32540ba3-ca4b-430e-85df-67c832112089 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-07-12T14:20:17.000000 | 2025-07-12 14:20:21.561001 | orchestrator | | 1cf801f5-ebe9-4104-9652-4db9e60e2772 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-07-12T14:20:20.000000 | 2025-07-12 14:20:21.561012 | orchestrator | | 06533aee-d1ec-4981-b9fc-f074e15bb49b | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-07-12T14:20:21.000000 | 2025-07-12 14:20:21.561046 | orchestrator | | c9a61fa1-f95b-4b6e-aaab-a7e929d4f21e | nova-compute | testbed-node-5 | nova | enabled | up | 2025-07-12T14:20:15.000000 | 2025-07-12 14:20:21.561057 | orchestrator | | 64a49ee4-ec42-4e06-8117-795e9f38fdbe | nova-compute | testbed-node-4 | nova | enabled | up | 2025-07-12T14:20:16.000000 | 2025-07-12 14:20:21.561067 | orchestrator | | 0ad8c708-ff2b-4e51-a72e-9f6764bf6c92 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-07-12T14:20:18.000000 | 2025-07-12 14:20:21.561078 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 14:20:21.962345 | orchestrator | + openstack hypervisor list 2025-07-12 14:20:26.790638 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 14:20:26.790773 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-07-12 14:20:26.790789 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 14:20:26.790801 | orchestrator | | 57e6ced9-ad9b-48e7-aaa6-05b7c32eb336 | 
testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-07-12 14:20:26.790813 | orchestrator | | 46fd8ca0-1018-4612-a8eb-a0b29b82a12a | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-07-12 14:20:26.790824 | orchestrator | | c43c4059-354a-489f-86ef-5133867261a3 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-07-12 14:20:26.790835 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 14:20:27.079478 | orchestrator | 2025-07-12 14:20:27.079575 | orchestrator | # Run OpenStack test play 2025-07-12 14:20:27.079590 | orchestrator | 2025-07-12 14:20:27.079601 | orchestrator | + echo 2025-07-12 14:20:27.079613 | orchestrator | + echo '# Run OpenStack test play' 2025-07-12 14:20:27.079625 | orchestrator | + echo 2025-07-12 14:20:27.079636 | orchestrator | + osism apply --environment openstack test 2025-07-12 14:20:28.838640 | orchestrator | 2025-07-12 14:20:28 | INFO  | Trying to run play test in environment openstack 2025-07-12 14:20:38.993881 | orchestrator | 2025-07-12 14:20:38 | INFO  | Task 6681c6f5-f932-4015-a63d-906e6e967331 (test) was prepared for execution. 2025-07-12 14:20:38.994125 | orchestrator | 2025-07-12 14:20:38 | INFO  | It takes a moment until task 6681c6f5-f932-4015-a63d-906e6e967331 (test) has been started and output is visible here. 
2025-07-12 14:26:43.771170 | orchestrator |
2025-07-12 14:26:43.771289 | orchestrator | PLAY [Create test project] *****************************************************
2025-07-12 14:26:43.771307 | orchestrator |
2025-07-12 14:26:43.771319 | orchestrator | TASK [Create test domain] ******************************************************
2025-07-12 14:26:43.771331 | orchestrator | Saturday 12 July 2025 14:20:42 +0000 (0:00:00.076) 0:00:00.076 *********
2025-07-12 14:26:43.771341 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771353 | orchestrator |
2025-07-12 14:26:43.771364 | orchestrator | TASK [Create test-admin user] **************************************************
2025-07-12 14:26:43.771375 | orchestrator | Saturday 12 July 2025 14:20:46 +0000 (0:00:03.779) 0:00:03.856 *********
2025-07-12 14:26:43.771386 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771396 | orchestrator |
2025-07-12 14:26:43.771407 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-07-12 14:26:43.771418 | orchestrator | Saturday 12 July 2025 14:20:50 +0000 (0:00:04.161) 0:00:08.017 *********
2025-07-12 14:26:43.771429 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771440 | orchestrator |
2025-07-12 14:26:43.771451 | orchestrator | TASK [Create test project] *****************************************************
2025-07-12 14:26:43.771462 | orchestrator | Saturday 12 July 2025 14:20:56 +0000 (0:00:06.375) 0:00:14.392 *********
2025-07-12 14:26:43.771472 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771483 | orchestrator |
2025-07-12 14:26:43.771494 | orchestrator | TASK [Create test user] ********************************************************
2025-07-12 14:26:43.771505 | orchestrator | Saturday 12 July 2025 14:21:00 +0000 (0:00:04.014) 0:00:18.406 *********
2025-07-12 14:26:43.771539 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771550 | orchestrator |
2025-07-12 14:26:43.771561 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-07-12 14:26:43.771572 | orchestrator | Saturday 12 July 2025 14:21:05 +0000 (0:00:04.130) 0:00:22.537 *********
2025-07-12 14:26:43.771583 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-07-12 14:26:43.771593 | orchestrator | changed: [localhost] => (item=member)
2025-07-12 14:26:43.771605 | orchestrator | changed: [localhost] => (item=creator)
2025-07-12 14:26:43.771615 | orchestrator |
2025-07-12 14:26:43.771626 | orchestrator | TASK [Create test server group] ************************************************
2025-07-12 14:26:43.771651 | orchestrator | Saturday 12 July 2025 14:21:17 +0000 (0:00:11.963) 0:00:34.501 *********
2025-07-12 14:26:43.771662 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771673 | orchestrator |
2025-07-12 14:26:43.771684 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-07-12 14:26:43.771694 | orchestrator | Saturday 12 July 2025 14:21:21 +0000 (0:00:04.386) 0:00:38.888 *********
2025-07-12 14:26:43.771707 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771719 | orchestrator |
2025-07-12 14:26:43.771731 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-07-12 14:26:43.771743 | orchestrator | Saturday 12 July 2025 14:21:26 +0000 (0:00:05.100) 0:00:43.988 *********
2025-07-12 14:26:43.771755 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771767 | orchestrator |
2025-07-12 14:26:43.771778 | orchestrator | TASK [Create icmp security group] **********************************************
2025-07-12 14:26:43.771790 | orchestrator | Saturday 12 July 2025 14:21:30 +0000 (0:00:04.132) 0:00:48.120 *********
2025-07-12 14:26:43.771802 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771815 | orchestrator |
2025-07-12 14:26:43.771828 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-07-12 14:26:43.771838 | orchestrator | Saturday 12 July 2025 14:21:34 +0000 (0:00:03.885) 0:00:52.006 *********
2025-07-12 14:26:43.771849 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771859 | orchestrator |
2025-07-12 14:26:43.771870 | orchestrator | TASK [Create test keypair] *****************************************************
2025-07-12 14:26:43.771880 | orchestrator | Saturday 12 July 2025 14:21:38 +0000 (0:00:04.168) 0:00:56.174 *********
2025-07-12 14:26:43.771891 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771901 | orchestrator |
2025-07-12 14:26:43.771912 | orchestrator | TASK [Create test network topology] ********************************************
2025-07-12 14:26:43.771923 | orchestrator | Saturday 12 July 2025 14:21:42 +0000 (0:00:03.875) 0:01:00.049 *********
2025-07-12 14:26:43.771934 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.771972 | orchestrator |
2025-07-12 14:26:43.771988 | orchestrator | TASK [Create test instances] ***************************************************
2025-07-12 14:26:43.772005 | orchestrator | Saturday 12 July 2025 14:21:58 +0000 (0:00:16.385) 0:01:16.435 *********
2025-07-12 14:26:43.772022 | orchestrator | changed: [localhost] => (item=test)
2025-07-12 14:26:43.772039 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-12 14:26:43.772057 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-12 14:26:43.772075 | orchestrator |
2025-07-12 14:26:43.772087 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-07-12 14:26:43.772098 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-12 14:26:43.772108 | orchestrator |
2025-07-12 14:26:43.772119 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-07-12 14:26:43.772129 | orchestrator |
2025-07-12 14:26:43.772140 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-07-12 14:26:43.772151 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-12 14:26:43.772161 | orchestrator |
2025-07-12 14:26:43.772171 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-07-12 14:26:43.772182 | orchestrator | Saturday 12 July 2025 14:25:19 +0000 (0:03:20.423) 0:04:36.859 *********
2025-07-12 14:26:43.772202 | orchestrator | changed: [localhost] => (item=test)
2025-07-12 14:26:43.772213 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-12 14:26:43.772223 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-12 14:26:43.772234 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-12 14:26:43.772244 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-12 14:26:43.772255 | orchestrator |
2025-07-12 14:26:43.772265 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-07-12 14:26:43.772276 | orchestrator | Saturday 12 July 2025 14:25:43 +0000 (0:00:24.121) 0:05:00.981 *********
2025-07-12 14:26:43.772287 | orchestrator | changed: [localhost] => (item=test)
2025-07-12 14:26:43.772298 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-12 14:26:43.772328 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-12 14:26:43.772340 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-12 14:26:43.772350 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-12 14:26:43.772361 | orchestrator |
2025-07-12 14:26:43.772371 | orchestrator | TASK [Create test volume] ******************************************************
2025-07-12 14:26:43.772382 | orchestrator | Saturday 12 July 2025 14:26:17 +0000 (0:00:33.654) 0:05:34.636 *********
2025-07-12 14:26:43.772393 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.772404 | orchestrator |
2025-07-12 14:26:43.772414 | orchestrator | TASK [Attach test volume]
******************************************************
2025-07-12 14:26:43.772425 | orchestrator | Saturday 12 July 2025 14:26:24 +0000 (0:00:07.486) 0:05:42.122 *********
2025-07-12 14:26:43.772435 | orchestrator | changed: [localhost]
2025-07-12 14:26:43.772446 | orchestrator |
2025-07-12 14:26:43.772456 | orchestrator | TASK [Create floating ip address] **********************************************
2025-07-12 14:26:43.772467 | orchestrator | Saturday 12 July 2025 14:26:38 +0000 (0:00:13.574) 0:05:55.697 *********
2025-07-12 14:26:43.772477 | orchestrator | ok: [localhost]
2025-07-12 14:26:43.772488 | orchestrator |
2025-07-12 14:26:43.772498 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-07-12 14:26:43.772509 | orchestrator | Saturday 12 July 2025 14:26:43 +0000 (0:00:05.200) 0:06:00.897 *********
2025-07-12 14:26:43.772519 | orchestrator | ok: [localhost] => {
2025-07-12 14:26:43.772530 | orchestrator |  "msg": "192.168.112.159"
2025-07-12 14:26:43.772541 | orchestrator | }
2025-07-12 14:26:43.772552 | orchestrator |
2025-07-12 14:26:43.772567 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 14:26:43.772578 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 14:26:43.772590 | orchestrator |
2025-07-12 14:26:43.772600 | orchestrator |
2025-07-12 14:26:43.772611 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 14:26:43.772622 | orchestrator | Saturday 12 July 2025 14:26:43 +0000 (0:00:00.045) 0:06:00.943 *********
2025-07-12 14:26:43.772632 | orchestrator | ===============================================================================
2025-07-12 14:26:43.772643 | orchestrator | Create test instances ------------------------------------------------- 200.42s
2025-07-12 14:26:43.772654 | orchestrator | Add tag to instances --------------------------------------------------- 33.65s
2025-07-12 14:26:43.772712 | orchestrator | Add metadata to instances ---------------------------------------------- 24.12s
2025-07-12 14:26:43.772725 | orchestrator | Create test network topology ------------------------------------------- 16.39s
2025-07-12 14:26:43.772736 | orchestrator | Attach test volume ----------------------------------------------------- 13.57s
2025-07-12 14:26:43.772747 | orchestrator | Add member roles to user test ------------------------------------------ 11.96s
2025-07-12 14:26:43.772758 | orchestrator | Create test volume ------------------------------------------------------ 7.49s
2025-07-12 14:26:43.772768 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.38s
2025-07-12 14:26:43.772779 | orchestrator | Create floating ip address ---------------------------------------------- 5.20s
2025-07-12 14:26:43.772789 | orchestrator | Create ssh security group ----------------------------------------------- 5.10s
2025-07-12 14:26:43.772807 | orchestrator | Create test server group ------------------------------------------------ 4.39s
2025-07-12 14:26:43.772818 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.17s
2025-07-12 14:26:43.772828 | orchestrator | Create test-admin user -------------------------------------------------- 4.16s
2025-07-12 14:26:43.772839 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.13s
2025-07-12 14:26:43.772849 | orchestrator | Create test user -------------------------------------------------------- 4.13s
2025-07-12 14:26:43.772860 | orchestrator | Create test project ----------------------------------------------------- 4.01s
2025-07-12 14:26:43.772870 | orchestrator | Create icmp security group ---------------------------------------------- 3.89s
2025-07-12 14:26:43.772881 | orchestrator | Create test keypair ----------------------------------------------------- 3.88s
2025-07-12 14:26:43.772891 | orchestrator | Create test domain ------------------------------------------------------ 3.78s
2025-07-12 14:26:43.772902 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s
2025-07-12 14:26:44.069439 | orchestrator | + server_list
2025-07-12 14:26:44.069542 | orchestrator | + openstack --os-cloud test server list
2025-07-12 14:26:47.703240 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-12 14:26:47.703343 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-07-12 14:26:47.703357 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-12 14:26:47.703369 | orchestrator | | 27e4ee80-e0e1-49ed-8288-298fc5410447 | test-4 | ACTIVE | auto_allocated_network=10.42.0.59, 192.168.112.120 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 14:26:47.703380 | orchestrator | | e17d4f7b-4bc6-401f-a5df-0ca71650af27 | test-3 | ACTIVE | auto_allocated_network=10.42.0.14, 192.168.112.105 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 14:26:47.703390 | orchestrator | | 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 | test-2 | ACTIVE | auto_allocated_network=10.42.0.30, 192.168.112.141 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 14:26:47.703401 | orchestrator | | 8a9fa1eb-0f17-4fa9-be08-066123879c47 | test-1 | ACTIVE | auto_allocated_network=10.42.0.46, 192.168.112.184 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 14:26:47.703412 | orchestrator | | 13302c0d-c25c-4b96-9be6-85da1a96ef57 | test | ACTIVE | auto_allocated_network=10.42.0.6, 192.168.112.159 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-12 14:26:47.703423 | orchestrator |
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-12 14:26:47.989801 | orchestrator | + openstack --os-cloud test server show test
2025-07-12 14:26:51.413191 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:51.413298 | orchestrator | | Field | Value |
2025-07-12 14:26:51.413315 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:51.413332 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 14:26:51.413361 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 14:26:51.413373 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 14:26:51.413384 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-07-12 14:26:51.413395 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 14:26:51.413407 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 14:26:51.413418 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 14:26:51.413433 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 14:26:51.413473 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 14:26:51.413486 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 14:26:51.413497 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 14:26:51.413515 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 14:26:51.413530 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 14:26:51.413541 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 14:26:51.413552 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 14:26:51.413563 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T14:22:30.000000 |
2025-07-12 14:26:51.413573 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 14:26:51.413584 | orchestrator | | accessIPv4 | |
2025-07-12 14:26:51.413595 | orchestrator | | accessIPv6 | |
2025-07-12 14:26:51.413606 | orchestrator | | addresses | auto_allocated_network=10.42.0.6, 192.168.112.159 |
2025-07-12 14:26:51.413624 | orchestrator | | config_drive | |
2025-07-12 14:26:51.413635 | orchestrator | | created | 2025-07-12T14:22:07Z |
2025-07-12 14:26:51.413653 | orchestrator | | description | None |
2025-07-12 14:26:51.413668 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-12 14:26:51.413679 | orchestrator | | hostId | 0dab8198abddfbeb3bc6e416014de08fc0f385a429fa77e8e4c8fae8 |
2025-07-12 14:26:51.413690 | orchestrator | | host_status | None |
2025-07-12 14:26:51.413700 | orchestrator | | id | 13302c0d-c25c-4b96-9be6-85da1a96ef57 |
2025-07-12 14:26:51.413711 | orchestrator | | image | Cirros 0.6.2 (1416e678-cd81-415d-951c-984e50bf2970) |
2025-07-12 14:26:51.413722 | orchestrator | | key_name | test |
2025-07-12 14:26:51.413733 | orchestrator | | locked | False |
2025-07-12 14:26:51.413743 | orchestrator | | locked_reason | None |
2025-07-12 14:26:51.413754 | orchestrator | | name | test |
2025-07-12 14:26:51.413771 | orchestrator | | pinned_availability_zone | None |
2025-07-12 14:26:51.413788 | orchestrator | | progress | 0 |
2025-07-12 14:26:51.413799 | orchestrator | | project_id | 8b4bc4ea7dd741e2bcbe8872d56604fd |
2025-07-12 14:26:51.413814 | orchestrator | | properties | hostname='test' |
2025-07-12 14:26:51.413825 | orchestrator | | security_groups | name='icmp' |
2025-07-12 14:26:51.413836 | orchestrator | | | name='ssh' |
2025-07-12 14:26:51.413846 | orchestrator | | server_groups | None |
2025-07-12 14:26:51.413857 | orchestrator | | status | ACTIVE |
2025-07-12 14:26:51.413868 | orchestrator | | tags | test |
2025-07-12 14:26:51.413973 | orchestrator | | trusted_image_certificates | None |
2025-07-12 14:26:51.413996 | orchestrator | | updated | 2025-07-12T14:25:24Z |
2025-07-12 14:26:51.414091 | orchestrator | | user_id | 1b68035535054e038efc8918d8d4ecad |
2025-07-12 14:26:51.414122 | orchestrator | | volumes_attached | delete_on_termination='False', id='0ff3b515-50b4-445b-baba-f1dd59102680' |
2025-07-12 14:26:51.416764 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:51.674276 | orchestrator | + openstack --os-cloud test server show test-1
2025-07-12 14:26:54.814376 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:54.814506 | orchestrator | | Field | Value |
2025-07-12 14:26:54.814524 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:54.814548 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 14:26:54.814560 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 14:26:54.814572 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 14:26:54.814583 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-07-12 14:26:54.814614 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 14:26:54.814651 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 14:26:54.814663 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 14:26:54.814674 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 14:26:54.814704 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 14:26:54.814720 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 14:26:54.814732 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 14:26:54.814742 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 14:26:54.814754 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 14:26:54.814764 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 14:26:54.814775 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 14:26:54.814786 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T14:23:11.000000 |
2025-07-12 14:26:54.814804 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 14:26:54.814815 | orchestrator | | accessIPv4 | |
2025-07-12 14:26:54.814826 | orchestrator | | accessIPv6 | |
2025-07-12 14:26:54.814837 | orchestrator | | addresses | auto_allocated_network=10.42.0.46, 192.168.112.184 |
2025-07-12 14:26:54.814855 | orchestrator | | config_drive | |
2025-07-12 14:26:54.814872 | orchestrator | | created | 2025-07-12T14:22:51Z |
2025-07-12 14:26:54.814883 | orchestrator | | description | None |
2025-07-12 14:26:54.814894 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-12 14:26:54.814906 | orchestrator | | hostId | 3d03c89f3cc13e8217be25db3863fed50f20ca1034f4303eb43887fd |
2025-07-12 14:26:54.814917 | orchestrator | | host_status | None |
2025-07-12 14:26:54.814928 | orchestrator | | id | 8a9fa1eb-0f17-4fa9-be08-066123879c47 |
2025-07-12 14:26:54.814945 | orchestrator | | image | Cirros 0.6.2 (1416e678-cd81-415d-951c-984e50bf2970) |
2025-07-12 14:26:54.815012 | orchestrator | | key_name | test |
2025-07-12 14:26:54.815024 | orchestrator | | locked | False |
2025-07-12 14:26:54.815035 | orchestrator | | locked_reason | None |
2025-07-12 14:26:54.815046 | orchestrator | | name | test-1 |
2025-07-12 14:26:54.815064 | orchestrator | | pinned_availability_zone | None |
2025-07-12 14:26:54.815081 | orchestrator | | progress | 0 |
2025-07-12 14:26:54.815092 | orchestrator | | project_id | 8b4bc4ea7dd741e2bcbe8872d56604fd |
2025-07-12 14:26:54.815103 | orchestrator | | properties | hostname='test-1' |
2025-07-12 14:26:54.815114 | orchestrator | | security_groups | name='icmp' |
2025-07-12 14:26:54.815125 | orchestrator | | | name='ssh' |
2025-07-12 14:26:54.815143 | orchestrator | | server_groups | None |
2025-07-12 14:26:54.815154 | orchestrator | | status | ACTIVE |
2025-07-12 14:26:54.815165 | orchestrator | | tags | test |
2025-07-12 14:26:54.815176 | orchestrator | | trusted_image_certificates | None |
2025-07-12 14:26:54.815187 | orchestrator | | updated | 2025-07-12T14:25:28Z |
2025-07-12 14:26:54.815203 | orchestrator | | user_id | 1b68035535054e038efc8918d8d4ecad |
2025-07-12 14:26:54.815219 | orchestrator | | volumes_attached | |
2025-07-12 14:26:54.819690 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:55.100878 | orchestrator | + openstack --os-cloud test server show test-2
2025-07-12 14:26:58.223892 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:58.224034 | orchestrator | | Field | Value |
2025-07-12 14:26:58.224075 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:58.224088 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 14:26:58.224099 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 14:26:58.224110 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 14:26:58.224121 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-07-12 14:26:58.224132 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 14:26:58.224143 | orchestrator | |
OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 14:26:58.224155 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 14:26:58.224166 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 14:26:58.224195 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 14:26:58.224208 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 14:26:58.224227 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 14:26:58.224238 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 14:26:58.224249 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 14:26:58.224260 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 14:26:58.224271 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 14:26:58.224281 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T14:23:53.000000 |
2025-07-12 14:26:58.224292 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 14:26:58.224303 | orchestrator | | accessIPv4 | |
2025-07-12 14:26:58.224336 | orchestrator | | accessIPv6 | |
2025-07-12 14:26:58.224348 | orchestrator | | addresses | auto_allocated_network=10.42.0.30, 192.168.112.141 |
2025-07-12 14:26:58.224367 | orchestrator | | config_drive | |
2025-07-12 14:26:58.224391 | orchestrator | | created | 2025-07-12T14:23:30Z |
2025-07-12 14:26:58.224403 | orchestrator | | description | None |
2025-07-12 14:26:58.224414 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-12 14:26:58.224425 | orchestrator | | hostId | 7eaacadf878f12a0ae73290cf2c815983dc7531a72085df34f502c72 |
2025-07-12 14:26:58.224435 | orchestrator | | host_status | None |
2025-07-12 14:26:58.224473 | orchestrator | | id | 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 |
2025-07-12 14:26:58.224484 | orchestrator | | image | Cirros 0.6.2 (1416e678-cd81-415d-951c-984e50bf2970) |
2025-07-12 14:26:58.224495 | orchestrator | | key_name | test |
2025-07-12 14:26:58.224506 | orchestrator | | locked | False |
2025-07-12 14:26:58.224522 | orchestrator | | locked_reason | None |
2025-07-12 14:26:58.224533 | orchestrator | | name | test-2 |
2025-07-12 14:26:58.224558 | orchestrator | | pinned_availability_zone | None |
2025-07-12 14:26:58.224570 | orchestrator | | progress | 0 |
2025-07-12 14:26:58.224581 | orchestrator | | project_id | 8b4bc4ea7dd741e2bcbe8872d56604fd |
2025-07-12 14:26:58.224591 | orchestrator | | properties | hostname='test-2' |
2025-07-12 14:26:58.224602 | orchestrator | | security_groups | name='icmp' |
2025-07-12 14:26:58.224613 | orchestrator | | | name='ssh' |
2025-07-12 14:26:58.224624 | orchestrator | | server_groups | None |
2025-07-12 14:26:58.224635 | orchestrator | | status | ACTIVE |
2025-07-12 14:26:58.224645 | orchestrator | | tags | test |
2025-07-12 14:26:58.224656 | orchestrator | | trusted_image_certificates | None |
2025-07-12 14:26:58.224678 | orchestrator | | updated | 2025-07-12T14:25:33Z |
2025-07-12 14:26:58.224694 | orchestrator | | user_id | 1b68035535054e038efc8918d8d4ecad |
2025-07-12 14:26:58.224705 | orchestrator | | volumes_attached | |
2025-07-12 14:26:58.228412 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:26:58.512453 | orchestrator | + openstack --os-cloud test server show test-3
2025-07-12 14:27:01.589782 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:27:01.589891 | orchestrator | | Field | Value |
2025-07-12 14:27:01.589907 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:27:01.589918 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 14:27:01.589930 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 14:27:01.589941 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 14:27:01.589951 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-07-12 14:27:01.590081 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 14:27:01.590097 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 14:27:01.590108 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 14:27:01.590119 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 14:27:01.590150 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 14:27:01.590162 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 14:27:01.590173 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 14:27:01.590184 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 14:27:01.590195 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 14:27:01.590206 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 14:27:01.590216 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 14:27:01.590235 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T14:24:29.000000 |
2025-07-12 14:27:01.590252 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 14:27:01.590263 | orchestrator | | accessIPv4 | |
2025-07-12 14:27:01.590274 | orchestrator | | accessIPv6 | |
2025-07-12 14:27:01.590285 | orchestrator | | addresses | auto_allocated_network=10.42.0.14, 192.168.112.105 |
2025-07-12 14:27:01.590302 | orchestrator | | config_drive | |
2025-07-12 14:27:01.590314 | orchestrator | | created | 2025-07-12T14:24:14Z |
2025-07-12 14:27:01.590327 | orchestrator | | description | None |
2025-07-12 14:27:01.590339 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-12 14:27:01.590352 | orchestrator | | hostId | 3d03c89f3cc13e8217be25db3863fed50f20ca1034f4303eb43887fd |
2025-07-12 14:27:01.590364 | orchestrator | | host_status | None |
2025-07-12 14:27:01.590385 | orchestrator | | id | e17d4f7b-4bc6-401f-a5df-0ca71650af27 |
2025-07-12 14:27:01.590397 | orchestrator | | image | Cirros 0.6.2 (1416e678-cd81-415d-951c-984e50bf2970) |
2025-07-12 14:27:01.590414 | orchestrator | | key_name | test |
2025-07-12 14:27:01.590428 | orchestrator | | locked | False |
2025-07-12 14:27:01.590441 | orchestrator | | locked_reason | None |
2025-07-12 14:27:01.590453 | orchestrator | | name | test-3 |
2025-07-12 14:27:01.590472 | orchestrator | | pinned_availability_zone | None |
2025-07-12 14:27:01.590486 | orchestrator | | progress | 0 |
2025-07-12 14:27:01.590499 | orchestrator | | project_id | 8b4bc4ea7dd741e2bcbe8872d56604fd |
2025-07-12 14:27:01.590511 | orchestrator | | properties | hostname='test-3' |
2025-07-12 14:27:01.590531 | orchestrator | | security_groups | name='icmp' |
2025-07-12 14:27:01.590544 | orchestrator | | | name='ssh' |
2025-07-12 14:27:01.590556 | orchestrator | | server_groups | None |
2025-07-12 14:27:01.590568 | orchestrator | | status | ACTIVE |
2025-07-12 14:27:01.590589 | orchestrator | | tags | test |
2025-07-12 14:27:01.590601 | orchestrator | | trusted_image_certificates | None |
2025-07-12 14:27:01.590615 | orchestrator | | updated | 2025-07-12T14:25:38Z |
2025-07-12 14:27:01.590633 | orchestrator | | user_id | 1b68035535054e038efc8918d8d4ecad |
2025-07-12 14:27:01.590647 | orchestrator | | volumes_attached | |
2025-07-12 14:27:01.595471 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:27:01.877925 | orchestrator | + openstack --os-cloud test server show test-4
2025-07-12 14:27:05.073389 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:27:05.073522 | orchestrator | | Field | Value |
2025-07-12 14:27:05.073546 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-07-12 14:27:05.073560 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-12 14:27:05.073575 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-12 14:27:05.073605 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-12 14:27:05.073620 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-07-12 14:27:05.073634 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-12 14:27:05.073648 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-12 14:27:05.073662 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-12 14:27:05.073677 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-12 14:27:05.073711 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-12 14:27:05.073734 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-12 14:27:05.073742 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-12 14:27:05.073751 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-12 14:27:05.073759 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-12 14:27:05.073766 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-12 14:27:05.073779 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-12 14:27:05.073787 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T14:25:03.000000 |
2025-07-12 14:27:05.073795 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-12 14:27:05.073803 | orchestrator | | accessIPv4 | |
2025-07-12 14:27:05.073811 | orchestrator | | accessIPv6 | |
2025-07-12 14:27:05.073825 | orchestrator | | addresses | auto_allocated_network=10.42.0.59, 192.168.112.120 |
2025-07-12 14:27:05.073839 | orchestrator | | config_drive | |
2025-07-12 14:27:05.073847 | orchestrator | | created | 2025-07-12T14:24:46Z |
2025-07-12 14:27:05.073855 | orchestrator | | description | None |
2025-07-12 14:27:05.073863 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core',
extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-12 14:27:05.073871 | orchestrator | | hostId | 0dab8198abddfbeb3bc6e416014de08fc0f385a429fa77e8e4c8fae8 | 2025-07-12 14:27:05.073883 | orchestrator | | host_status | None | 2025-07-12 14:27:05.073891 | orchestrator | | id | 27e4ee80-e0e1-49ed-8288-298fc5410447 | 2025-07-12 14:27:05.073899 | orchestrator | | image | Cirros 0.6.2 (1416e678-cd81-415d-951c-984e50bf2970) | 2025-07-12 14:27:05.073907 | orchestrator | | key_name | test | 2025-07-12 14:27:05.073915 | orchestrator | | locked | False | 2025-07-12 14:27:05.073928 | orchestrator | | locked_reason | None | 2025-07-12 14:27:05.073936 | orchestrator | | name | test-4 | 2025-07-12 14:27:05.073949 | orchestrator | | pinned_availability_zone | None | 2025-07-12 14:27:05.073990 | orchestrator | | progress | 0 | 2025-07-12 14:27:05.074000 | orchestrator | | project_id | 8b4bc4ea7dd741e2bcbe8872d56604fd | 2025-07-12 14:27:05.074008 | orchestrator | | properties | hostname='test-4' | 2025-07-12 14:27:05.074060 | orchestrator | | security_groups | name='icmp' | 2025-07-12 14:27:05.074073 | orchestrator | | | name='ssh' | 2025-07-12 14:27:05.074082 | orchestrator | | server_groups | None | 2025-07-12 14:27:05.074090 | orchestrator | | status | ACTIVE | 2025-07-12 14:27:05.074098 | orchestrator | | tags | test | 2025-07-12 14:27:05.074112 | orchestrator | | trusted_image_certificates | None | 2025-07-12 14:27:05.074121 | orchestrator | | updated | 2025-07-12T14:25:43Z | 2025-07-12 14:27:05.074134 | orchestrator | | user_id | 1b68035535054e038efc8918d8d4ecad | 2025-07-12 14:27:05.074143 | orchestrator | | volumes_attached | | 2025-07-12 14:27:05.078107 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 14:27:05.399375 | orchestrator | + server_ping 2025-07-12 14:27:05.400953 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-12 14:27:05.401556 | orchestrator | ++ tr -d '\r' 2025-07-12 14:27:08.282713 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:27:08.282789 | orchestrator | + ping -c3 192.168.112.184 2025-07-12 14:27:08.297542 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 2025-07-12 14:27:08.297578 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=7.30 ms 2025-07-12 14:27:09.294284 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.07 ms 2025-07-12 14:27:10.295892 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=1.47 ms 2025-07-12 14:27:10.296049 | orchestrator | 2025-07-12 14:27:10.296067 | orchestrator | --- 192.168.112.184 ping statistics --- 2025-07-12 14:27:10.296078 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-12 14:27:10.296088 | orchestrator | rtt min/avg/max/mdev = 1.467/3.614/7.302/2.619 ms 2025-07-12 14:27:10.296110 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:27:10.296121 | orchestrator | + ping -c3 192.168.112.159 2025-07-12 14:27:10.306807 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data. 
2025-07-12 14:27:10.306899 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=5.77 ms 2025-07-12 14:27:11.304539 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=2.19 ms 2025-07-12 14:27:12.306192 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=2.04 ms 2025-07-12 14:27:12.306268 | orchestrator | 2025-07-12 14:27:12.306278 | orchestrator | --- 192.168.112.159 ping statistics --- 2025-07-12 14:27:12.306286 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 14:27:12.306293 | orchestrator | rtt min/avg/max/mdev = 2.038/3.333/5.768/1.722 ms 2025-07-12 14:27:12.306806 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:27:12.306841 | orchestrator | + ping -c3 192.168.112.141 2025-07-12 14:27:12.321187 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data. 2025-07-12 14:27:12.321228 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=9.22 ms 2025-07-12 14:27:13.316670 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.64 ms 2025-07-12 14:27:14.319424 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=1.97 ms 2025-07-12 14:27:14.319530 | orchestrator | 2025-07-12 14:27:14.319546 | orchestrator | --- 192.168.112.141 ping statistics --- 2025-07-12 14:27:14.319559 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-12 14:27:14.319570 | orchestrator | rtt min/avg/max/mdev = 1.966/4.611/9.224/3.273 ms 2025-07-12 14:27:14.319836 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:27:14.319860 | orchestrator | + ping -c3 192.168.112.105 2025-07-12 14:27:14.329425 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data. 
2025-07-12 14:27:14.329475 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=7.28 ms 2025-07-12 14:27:15.325780 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=1.81 ms 2025-07-12 14:27:16.328197 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.85 ms 2025-07-12 14:27:16.328312 | orchestrator | 2025-07-12 14:27:16.328329 | orchestrator | --- 192.168.112.105 ping statistics --- 2025-07-12 14:27:16.328342 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-12 14:27:16.328353 | orchestrator | rtt min/avg/max/mdev = 1.810/3.645/7.281/2.570 ms 2025-07-12 14:27:16.328856 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:27:16.328882 | orchestrator | + ping -c3 192.168.112.120 2025-07-12 14:27:16.339664 | orchestrator | PING 192.168.112.120 (192.168.112.120) 56(84) bytes of data. 2025-07-12 14:27:16.339708 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=1 ttl=63 time=7.20 ms 2025-07-12 14:27:17.336606 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=2 ttl=63 time=2.50 ms 2025-07-12 14:27:18.338816 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=3 ttl=63 time=2.26 ms 2025-07-12 14:27:18.338914 | orchestrator | 2025-07-12 14:27:18.338929 | orchestrator | --- 192.168.112.120 ping statistics --- 2025-07-12 14:27:18.338942 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 14:27:18.338954 | orchestrator | rtt min/avg/max/mdev = 2.264/3.987/7.203/2.275 ms 2025-07-12 14:27:18.339010 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-12 14:27:18.339023 | orchestrator | + compute_list 2025-07-12 14:27:18.339035 | orchestrator | + osism manage compute list testbed-node-3 2025-07-12 14:27:21.590782 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:27:21.590905 | 
orchestrator | | ID | Name | Status | 2025-07-12 14:27:21.590921 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 14:27:21.590933 | orchestrator | | 27e4ee80-e0e1-49ed-8288-298fc5410447 | test-4 | ACTIVE | 2025-07-12 14:27:21.590944 | orchestrator | | 13302c0d-c25c-4b96-9be6-85da1a96ef57 | test | ACTIVE | 2025-07-12 14:27:21.590955 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:27:21.903550 | orchestrator | + osism manage compute list testbed-node-4 2025-07-12 14:27:25.080347 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:27:25.080447 | orchestrator | | ID | Name | Status | 2025-07-12 14:27:25.080460 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 14:27:25.080472 | orchestrator | | 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 | test-2 | ACTIVE | 2025-07-12 14:27:25.080483 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:27:25.373456 | orchestrator | + osism manage compute list testbed-node-5 2025-07-12 14:27:28.495132 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:27:28.495201 | orchestrator | | ID | Name | Status | 2025-07-12 14:27:28.495208 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 14:27:28.495213 | orchestrator | | e17d4f7b-4bc6-401f-a5df-0ca71650af27 | test-3 | ACTIVE | 2025-07-12 14:27:28.495218 | orchestrator | | 8a9fa1eb-0f17-4fa9-be08-066123879c47 | test-1 | ACTIVE | 2025-07-12 14:27:28.495238 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:27:28.767688 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-07-12 14:27:31.825817 | orchestrator | 2025-07-12 14:27:31 | INFO  | Live migrating server 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 2025-07-12 14:27:45.540558 | orchestrator | 
2025-07-12 14:27:45 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress 2025-07-12 14:27:48.290972 | orchestrator | 2025-07-12 14:27:48 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress 2025-07-12 14:27:50.710695 | orchestrator | 2025-07-12 14:27:50 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress 2025-07-12 14:27:53.131859 | orchestrator | 2025-07-12 14:27:53 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress 2025-07-12 14:27:55.384586 | orchestrator | 2025-07-12 14:27:55 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress 2025-07-12 14:27:57.999233 | orchestrator | 2025-07-12 14:27:57 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress 2025-07-12 14:28:00.276235 | orchestrator | 2025-07-12 14:28:00 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress 2025-07-12 14:28:02.729814 | orchestrator | 2025-07-12 14:28:02 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) completed with status ACTIVE 2025-07-12 14:28:03.034424 | orchestrator | + compute_list 2025-07-12 14:28:03.034537 | orchestrator | + osism manage compute list testbed-node-3 2025-07-12 14:28:06.325272 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:28:06.325379 | orchestrator | | ID | Name | Status | 2025-07-12 14:28:06.325394 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 14:28:06.325406 | orchestrator | | 27e4ee80-e0e1-49ed-8288-298fc5410447 | test-4 | ACTIVE | 2025-07-12 14:28:06.325417 | orchestrator | | 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 | test-2 | ACTIVE | 2025-07-12 14:28:06.325428 | orchestrator | | 13302c0d-c25c-4b96-9be6-85da1a96ef57 | test | ACTIVE | 2025-07-12 14:28:06.325438 | 
orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:28:06.644864 | orchestrator | + osism manage compute list testbed-node-4 2025-07-12 14:28:09.670890 | orchestrator | +------+--------+----------+ 2025-07-12 14:28:09.671068 | orchestrator | | ID | Name | Status | 2025-07-12 14:28:09.671085 | orchestrator | |------+--------+----------| 2025-07-12 14:28:09.671097 | orchestrator | +------+--------+----------+ 2025-07-12 14:28:10.009740 | orchestrator | + osism manage compute list testbed-node-5 2025-07-12 14:28:13.095925 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:28:13.096054 | orchestrator | | ID | Name | Status | 2025-07-12 14:28:13.096069 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 14:28:13.096081 | orchestrator | | e17d4f7b-4bc6-401f-a5df-0ca71650af27 | test-3 | ACTIVE | 2025-07-12 14:28:13.096092 | orchestrator | | 8a9fa1eb-0f17-4fa9-be08-066123879c47 | test-1 | ACTIVE | 2025-07-12 14:28:13.096103 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:28:13.476119 | orchestrator | + server_ping 2025-07-12 14:28:13.477416 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-12 14:28:13.477453 | orchestrator | ++ tr -d '\r' 2025-07-12 14:28:16.543446 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:28:16.543569 | orchestrator | + ping -c3 192.168.112.184 2025-07-12 14:28:16.555349 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 
2025-07-12 14:28:16.555432 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=8.53 ms 2025-07-12 14:28:17.551238 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.55 ms 2025-07-12 14:28:18.553587 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=2.01 ms 2025-07-12 14:28:18.553697 | orchestrator | 2025-07-12 14:28:18.553713 | orchestrator | --- 192.168.112.184 ping statistics --- 2025-07-12 14:28:18.553727 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-12 14:28:18.553738 | orchestrator | rtt min/avg/max/mdev = 2.013/4.363/8.525/2.950 ms 2025-07-12 14:28:18.554209 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:28:18.554238 | orchestrator | + ping -c3 192.168.112.159 2025-07-12 14:28:18.564250 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data. 2025-07-12 14:28:18.564348 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=5.93 ms 2025-07-12 14:28:19.561508 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=2.74 ms 2025-07-12 14:28:20.563488 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=1.76 ms 2025-07-12 14:28:20.563617 | orchestrator | 2025-07-12 14:28:20.563635 | orchestrator | --- 192.168.112.159 ping statistics --- 2025-07-12 14:28:20.564430 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 14:28:20.564453 | orchestrator | rtt min/avg/max/mdev = 1.755/3.475/5.929/1.781 ms 2025-07-12 14:28:20.564539 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:28:20.564555 | orchestrator | + ping -c3 192.168.112.141 2025-07-12 14:28:20.577664 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data. 
2025-07-12 14:28:20.577702 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=8.43 ms 2025-07-12 14:28:21.573811 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.61 ms 2025-07-12 14:28:22.575358 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.04 ms 2025-07-12 14:28:22.575470 | orchestrator | 2025-07-12 14:28:22.575488 | orchestrator | --- 192.168.112.141 ping statistics --- 2025-07-12 14:28:22.575501 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-12 14:28:22.575513 | orchestrator | rtt min/avg/max/mdev = 2.035/4.359/8.430/2.887 ms 2025-07-12 14:28:22.575993 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:28:22.576050 | orchestrator | + ping -c3 192.168.112.105 2025-07-12 14:28:22.588125 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data. 2025-07-12 14:28:22.588197 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=6.33 ms 2025-07-12 14:28:23.585904 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.33 ms 2025-07-12 14:28:24.587298 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.80 ms 2025-07-12 14:28:24.587407 | orchestrator | 2025-07-12 14:28:24.587424 | orchestrator | --- 192.168.112.105 ping statistics --- 2025-07-12 14:28:24.587437 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 14:28:24.588057 | orchestrator | rtt min/avg/max/mdev = 1.803/3.486/6.332/2.023 ms 2025-07-12 14:28:24.588107 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:28:24.588120 | orchestrator | + ping -c3 192.168.112.120 2025-07-12 14:28:24.597587 | orchestrator | PING 192.168.112.120 (192.168.112.120) 56(84) bytes of data. 
2025-07-12 14:28:24.597627 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=1 ttl=63 time=5.96 ms 2025-07-12 14:28:25.595944 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=2 ttl=63 time=2.95 ms 2025-07-12 14:28:26.596711 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=3 ttl=63 time=1.81 ms 2025-07-12 14:28:26.596815 | orchestrator | 2025-07-12 14:28:26.596831 | orchestrator | --- 192.168.112.120 ping statistics --- 2025-07-12 14:28:26.596844 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 14:28:26.596856 | orchestrator | rtt min/avg/max/mdev = 1.808/3.573/5.963/1.752 ms 2025-07-12 14:28:26.597034 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-07-12 14:28:29.849103 | orchestrator | 2025-07-12 14:28:29 | INFO  | Live migrating server e17d4f7b-4bc6-401f-a5df-0ca71650af27 2025-07-12 14:28:43.072921 | orchestrator | 2025-07-12 14:28:43 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress 2025-07-12 14:28:45.476260 | orchestrator | 2025-07-12 14:28:45 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress 2025-07-12 14:28:47.783523 | orchestrator | 2025-07-12 14:28:47 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress 2025-07-12 14:28:50.119363 | orchestrator | 2025-07-12 14:28:50 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress 2025-07-12 14:28:52.373459 | orchestrator | 2025-07-12 14:28:52 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress 2025-07-12 14:28:54.711529 | orchestrator | 2025-07-12 14:28:54 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress 2025-07-12 14:28:57.213182 | orchestrator | 2025-07-12 14:28:57 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is 
still in progress 2025-07-12 14:28:59.522241 | orchestrator | 2025-07-12 14:28:59 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) completed with status ACTIVE 2025-07-12 14:28:59.522351 | orchestrator | 2025-07-12 14:28:59 | INFO  | Live migrating server 8a9fa1eb-0f17-4fa9-be08-066123879c47 2025-07-12 14:29:11.914934 | orchestrator | 2025-07-12 14:29:11 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress 2025-07-12 14:29:14.217187 | orchestrator | 2025-07-12 14:29:14 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress 2025-07-12 14:29:16.590375 | orchestrator | 2025-07-12 14:29:16 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress 2025-07-12 14:29:18.982669 | orchestrator | 2025-07-12 14:29:18 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress 2025-07-12 14:29:21.242631 | orchestrator | 2025-07-12 14:29:21 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress 2025-07-12 14:29:23.577271 | orchestrator | 2025-07-12 14:29:23 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress 2025-07-12 14:29:25.962724 | orchestrator | 2025-07-12 14:29:25 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress 2025-07-12 14:29:28.242164 | orchestrator | 2025-07-12 14:29:28 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) completed with status ACTIVE 2025-07-12 14:29:28.617506 | orchestrator | + compute_list 2025-07-12 14:29:28.617600 | orchestrator | + osism manage compute list testbed-node-3 2025-07-12 14:29:31.843847 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:29:31.843963 | orchestrator | | ID | Name | Status | 2025-07-12 14:29:31.843997 | orchestrator | 
|--------------------------------------+--------+----------| 2025-07-12 14:29:31.844010 | orchestrator | | 27e4ee80-e0e1-49ed-8288-298fc5410447 | test-4 | ACTIVE | 2025-07-12 14:29:31.844021 | orchestrator | | e17d4f7b-4bc6-401f-a5df-0ca71650af27 | test-3 | ACTIVE | 2025-07-12 14:29:31.844032 | orchestrator | | 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 | test-2 | ACTIVE | 2025-07-12 14:29:31.844101 | orchestrator | | 8a9fa1eb-0f17-4fa9-be08-066123879c47 | test-1 | ACTIVE | 2025-07-12 14:29:31.844112 | orchestrator | | 13302c0d-c25c-4b96-9be6-85da1a96ef57 | test | ACTIVE | 2025-07-12 14:29:31.844124 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 14:29:32.161670 | orchestrator | + osism manage compute list testbed-node-4 2025-07-12 14:29:34.821759 | orchestrator | +------+--------+----------+ 2025-07-12 14:29:34.821865 | orchestrator | | ID | Name | Status | 2025-07-12 14:29:34.821879 | orchestrator | |------+--------+----------| 2025-07-12 14:29:34.821891 | orchestrator | +------+--------+----------+ 2025-07-12 14:29:35.130801 | orchestrator | + osism manage compute list testbed-node-5 2025-07-12 14:29:37.997625 | orchestrator | +------+--------+----------+ 2025-07-12 14:29:37.997742 | orchestrator | | ID | Name | Status | 2025-07-12 14:29:37.997758 | orchestrator | |------+--------+----------| 2025-07-12 14:29:37.997771 | orchestrator | +------+--------+----------+ 2025-07-12 14:29:38.409239 | orchestrator | + server_ping 2025-07-12 14:29:38.410658 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-12 14:29:38.410692 | orchestrator | ++ tr -d '\r' 2025-07-12 14:29:41.361992 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:29:41.362173 | orchestrator | + ping -c3 192.168.112.184 2025-07-12 14:29:41.372409 | orchestrator | PING 192.168.112.184 
(192.168.112.184) 56(84) bytes of data. 2025-07-12 14:29:41.372471 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=8.64 ms 2025-07-12 14:29:42.368208 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.32 ms 2025-07-12 14:29:43.369886 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=1.95 ms 2025-07-12 14:29:43.369993 | orchestrator | 2025-07-12 14:29:43.370009 | orchestrator | --- 192.168.112.184 ping statistics --- 2025-07-12 14:29:43.370096 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 14:29:43.370108 | orchestrator | rtt min/avg/max/mdev = 1.945/4.303/8.641/3.071 ms 2025-07-12 14:29:43.370119 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 14:29:43.370129 | orchestrator | + ping -c3 192.168.112.159 2025-07-12 14:29:43.379007 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data. 
2025-07-12 14:29:43.379070 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=5.97 ms
2025-07-12 14:29:44.377166 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=2.54 ms
2025-07-12 14:29:45.377979 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=1.60 ms
2025-07-12 14:29:45.378182 | orchestrator |
2025-07-12 14:29:45.378199 | orchestrator | --- 192.168.112.159 ping statistics ---
2025-07-12 14:29:45.378211 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:29:45.378223 | orchestrator | rtt min/avg/max/mdev = 1.600/3.371/5.970/1.877 ms
2025-07-12 14:29:45.378387 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:29:45.378402 | orchestrator | + ping -c3 192.168.112.141
2025-07-12 14:29:45.390845 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2025-07-12 14:29:45.390880 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=7.21 ms
2025-07-12 14:29:46.388640 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=3.06 ms
2025-07-12 14:29:47.389433 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=1.85 ms
2025-07-12 14:29:47.389538 | orchestrator |
2025-07-12 14:29:47.389552 | orchestrator | --- 192.168.112.141 ping statistics ---
2025-07-12 14:29:47.389565 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:29:47.389577 | orchestrator | rtt min/avg/max/mdev = 1.847/4.039/7.214/2.298 ms
2025-07-12 14:29:47.389588 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:29:47.389600 | orchestrator | + ping -c3 192.168.112.105
2025-07-12 14:29:47.401081 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data.
2025-07-12 14:29:47.401109 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=7.24 ms
2025-07-12 14:29:48.397833 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.29 ms
2025-07-12 14:29:49.399565 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.81 ms
2025-07-12 14:29:49.399676 | orchestrator |
2025-07-12 14:29:49.399828 | orchestrator | --- 192.168.112.105 ping statistics ---
2025-07-12 14:29:49.399848 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:29:49.399860 | orchestrator | rtt min/avg/max/mdev = 1.810/3.778/7.239/2.454 ms
2025-07-12 14:29:49.399886 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:29:49.399898 | orchestrator | + ping -c3 192.168.112.120
2025-07-12 14:29:49.411187 | orchestrator | PING 192.168.112.120 (192.168.112.120) 56(84) bytes of data.
2025-07-12 14:29:49.411230 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=1 ttl=63 time=6.44 ms
2025-07-12 14:29:50.408963 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=2 ttl=63 time=2.44 ms
2025-07-12 14:29:51.411229 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=3 ttl=63 time=1.98 ms
2025-07-12 14:29:51.411336 | orchestrator |
2025-07-12 14:29:51.411353 | orchestrator | --- 192.168.112.120 ping statistics ---
2025-07-12 14:29:51.411366 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:29:51.411378 | orchestrator | rtt min/avg/max/mdev = 1.977/3.620/6.440/2.002 ms
2025-07-12 14:29:51.411389 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2025-07-12 14:29:54.662750 | orchestrator | 2025-07-12 14:29:54 | INFO  | Live migrating server 27e4ee80-e0e1-49ed-8288-298fc5410447
2025-07-12 14:30:05.348595 | orchestrator | 2025-07-12 14:30:05 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:30:07.683919 | orchestrator | 2025-07-12 14:30:07 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:30:10.043740 | orchestrator | 2025-07-12 14:30:10 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:30:12.437178 | orchestrator | 2025-07-12 14:30:12 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:30:14.723955 | orchestrator | 2025-07-12 14:30:14 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:30:17.011576 | orchestrator | 2025-07-12 14:30:17 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:30:19.282318 | orchestrator | 2025-07-12 14:30:19 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) completed with status ACTIVE
2025-07-12 14:30:19.282555 | orchestrator | 2025-07-12 14:30:19 | INFO  | Live migrating server e17d4f7b-4bc6-401f-a5df-0ca71650af27
2025-07-12 14:30:31.457526 | orchestrator | 2025-07-12 14:30:31 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:30:33.803695 | orchestrator | 2025-07-12 14:30:33 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:30:36.128931 | orchestrator | 2025-07-12 14:30:36 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:30:38.370766 | orchestrator | 2025-07-12 14:30:38 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:30:40.624133 | orchestrator | 2025-07-12 14:30:40 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:30:42.959009 | orchestrator | 2025-07-12 14:30:42 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:30:45.340352 | orchestrator | 2025-07-12 14:30:45 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:30:47.981528 | orchestrator | 2025-07-12 14:30:47 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) completed with status ACTIVE
2025-07-12 14:30:47.981640 | orchestrator | 2025-07-12 14:30:47 | INFO  | Live migrating server 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201
2025-07-12 14:30:59.664382 | orchestrator | 2025-07-12 14:30:59 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:31:02.066857 | orchestrator | 2025-07-12 14:31:02 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:31:04.633590 | orchestrator | 2025-07-12 14:31:04 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:31:06.887046 | orchestrator | 2025-07-12 14:31:06 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:31:09.241823 | orchestrator | 2025-07-12 14:31:09 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:31:11.592428 | orchestrator | 2025-07-12 14:31:11 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:31:13.935234 | orchestrator | 2025-07-12 14:31:13 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:31:16.202688 | orchestrator | 2025-07-12 14:31:16 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) completed with status ACTIVE
2025-07-12 14:31:16.202797 | orchestrator | 2025-07-12 14:31:16 | INFO  | Live migrating server 8a9fa1eb-0f17-4fa9-be08-066123879c47
2025-07-12 14:31:26.206210 | orchestrator | 2025-07-12 14:31:26 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:31:28.528158 | orchestrator | 2025-07-12 14:31:28 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:31:30.901855 | orchestrator | 2025-07-12 14:31:30 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:31:33.194100 | orchestrator | 2025-07-12 14:31:33 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:31:35.459660 | orchestrator | 2025-07-12 14:31:35 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:31:37.747227 | orchestrator | 2025-07-12 14:31:37 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:31:40.078425 | orchestrator | 2025-07-12 14:31:40 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:31:42.354572 | orchestrator | 2025-07-12 14:31:42 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) completed with status ACTIVE
2025-07-12 14:31:42.354680 | orchestrator | 2025-07-12 14:31:42 | INFO  | Live migrating server 13302c0d-c25c-4b96-9be6-85da1a96ef57
2025-07-12 14:31:52.469515 | orchestrator | 2025-07-12 14:31:52 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:31:54.781915 | orchestrator | 2025-07-12 14:31:54 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:31:57.167333 | orchestrator | 2025-07-12 14:31:57 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:31:59.543746 | orchestrator | 2025-07-12 14:31:59 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:32:01.822937 | orchestrator | 2025-07-12 14:32:01 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:32:04.104504 | orchestrator | 2025-07-12 14:32:04 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:32:06.417870 | orchestrator | 2025-07-12 14:32:06 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:32:08.827358 | orchestrator | 2025-07-12 14:32:08 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:32:11.118631 | orchestrator | 2025-07-12 14:32:11 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:32:13.363297 | orchestrator | 2025-07-12 14:32:13 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) completed with status ACTIVE
2025-07-12 14:32:13.781208 | orchestrator | + compute_list
2025-07-12 14:32:13.781313 | orchestrator | + osism manage compute list testbed-node-3
2025-07-12 14:32:16.763484 | orchestrator | +------+--------+----------+
2025-07-12 14:32:16.763602 | orchestrator | | ID | Name | Status |
2025-07-12 14:32:16.763618 | orchestrator | |------+--------+----------|
2025-07-12 14:32:16.763630 | orchestrator | +------+--------+----------+
2025-07-12 14:32:17.160158 | orchestrator | + osism manage compute list testbed-node-4
2025-07-12 14:32:20.364666 | orchestrator | +--------------------------------------+--------+----------+
2025-07-12 14:32:20.364774 | orchestrator | | ID | Name | Status |
2025-07-12 14:32:20.364787 | orchestrator | |--------------------------------------+--------+----------|
2025-07-12 14:32:20.364796 | orchestrator | | 27e4ee80-e0e1-49ed-8288-298fc5410447 | test-4 | ACTIVE |
2025-07-12 14:32:20.364805 | orchestrator | | e17d4f7b-4bc6-401f-a5df-0ca71650af27 | test-3 | ACTIVE |
2025-07-12 14:32:20.364814 | orchestrator | | 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 | test-2 | ACTIVE |
2025-07-12 14:32:20.364823 | orchestrator | | 8a9fa1eb-0f17-4fa9-be08-066123879c47 | test-1 | ACTIVE |
2025-07-12 14:32:20.364832 | orchestrator | | 13302c0d-c25c-4b96-9be6-85da1a96ef57 | test | ACTIVE |
2025-07-12 14:32:20.364841 | orchestrator | +--------------------------------------+--------+----------+
2025-07-12 14:32:20.712805 | orchestrator | + osism manage compute list testbed-node-5
2025-07-12 14:32:23.374462 | orchestrator | +------+--------+----------+
2025-07-12 14:32:23.374579 | orchestrator | | ID | Name | Status |
2025-07-12 14:32:23.374595 | orchestrator | |------+--------+----------|
2025-07-12 14:32:23.374606 | orchestrator | +------+--------+----------+
2025-07-12 14:32:23.722827 | orchestrator | + server_ping
2025-07-12 14:32:23.723317 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-12 14:32:23.723483 | orchestrator | ++ tr -d '\r'
2025-07-12 14:32:26.906264 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:32:26.906383 | orchestrator | + ping -c3 192.168.112.184
2025-07-12 14:32:26.920733 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data.
2025-07-12 14:32:26.920818 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=10.8 ms
2025-07-12 14:32:27.914826 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.47 ms
2025-07-12 14:32:28.916394 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=1.90 ms
2025-07-12 14:32:28.916555 | orchestrator |
2025-07-12 14:32:28.916575 | orchestrator | --- 192.168.112.184 ping statistics ---
2025-07-12 14:32:28.916589 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-07-12 14:32:28.916600 | orchestrator | rtt min/avg/max/mdev = 1.897/5.042/10.763/4.051 ms
2025-07-12 14:32:28.916967 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:32:28.917005 | orchestrator | + ping -c3 192.168.112.159
2025-07-12 14:32:28.932387 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data.
2025-07-12 14:32:28.932430 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=10.3 ms
2025-07-12 14:32:29.926593 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=2.55 ms
2025-07-12 14:32:30.927177 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=1.77 ms
2025-07-12 14:32:30.927279 | orchestrator |
2025-07-12 14:32:30.927294 | orchestrator | --- 192.168.112.159 ping statistics ---
2025-07-12 14:32:30.927307 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:32:30.927339 | orchestrator | rtt min/avg/max/mdev = 1.769/4.875/10.304/3.851 ms
2025-07-12 14:32:30.927736 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:32:30.927760 | orchestrator | + ping -c3 192.168.112.141
2025-07-12 14:32:30.941353 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2025-07-12 14:32:30.941455 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=9.03 ms
2025-07-12 14:32:31.936766 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.87 ms
2025-07-12 14:32:32.936464 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.09 ms
2025-07-12 14:32:32.936575 | orchestrator |
2025-07-12 14:32:32.936590 | orchestrator | --- 192.168.112.141 ping statistics ---
2025-07-12 14:32:32.936604 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-07-12 14:32:32.936616 | orchestrator | rtt min/avg/max/mdev = 2.091/4.662/9.030/3.104 ms
2025-07-12 14:32:32.936945 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:32:32.936970 | orchestrator | + ping -c3 192.168.112.105
2025-07-12 14:32:32.945929 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data.
2025-07-12 14:32:32.946082 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=6.06 ms
2025-07-12 14:32:33.943920 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.56 ms
2025-07-12 14:32:34.945424 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.86 ms
2025-07-12 14:32:34.945858 | orchestrator |
2025-07-12 14:32:34.945892 | orchestrator | --- 192.168.112.105 ping statistics ---
2025-07-12 14:32:34.945906 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:32:34.945918 | orchestrator | rtt min/avg/max/mdev = 1.862/3.491/6.056/1.835 ms
2025-07-12 14:32:34.945944 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:32:34.945957 | orchestrator | + ping -c3 192.168.112.120
2025-07-12 14:32:34.956482 | orchestrator | PING 192.168.112.120 (192.168.112.120) 56(84) bytes of data.
2025-07-12 14:32:34.956531 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=1 ttl=63 time=6.22 ms
2025-07-12 14:32:35.954376 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=2 ttl=63 time=2.31 ms
2025-07-12 14:32:36.956533 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=3 ttl=63 time=1.96 ms
2025-07-12 14:32:36.956640 | orchestrator |
2025-07-12 14:32:36.956657 | orchestrator | --- 192.168.112.120 ping statistics ---
2025-07-12 14:32:36.956670 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:32:36.956681 | orchestrator | rtt min/avg/max/mdev = 1.963/3.496/6.218/1.929 ms
2025-07-12 14:32:36.956693 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-07-12 14:32:40.140337 | orchestrator | 2025-07-12 14:32:40 | INFO  | Live migrating server 27e4ee80-e0e1-49ed-8288-298fc5410447
2025-07-12 14:32:51.686942 | orchestrator | 2025-07-12 14:32:51 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:32:54.031070 | orchestrator | 2025-07-12 14:32:54 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:32:56.374302 | orchestrator | 2025-07-12 14:32:56 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:32:58.879383 | orchestrator | 2025-07-12 14:32:58 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:33:01.173610 | orchestrator | 2025-07-12 14:33:01 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:33:03.450255 | orchestrator | 2025-07-12 14:33:03 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) is still in progress
2025-07-12 14:33:05.690332 | orchestrator | 2025-07-12 14:33:05 | INFO  | Live migration of 27e4ee80-e0e1-49ed-8288-298fc5410447 (test-4) completed with status ACTIVE
2025-07-12 14:33:05.690451 | orchestrator | 2025-07-12 14:33:05 | INFO  | Live migrating server e17d4f7b-4bc6-401f-a5df-0ca71650af27
2025-07-12 14:33:15.477183 | orchestrator | 2025-07-12 14:33:15 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:33:17.823326 | orchestrator | 2025-07-12 14:33:17 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:33:20.151776 | orchestrator | 2025-07-12 14:33:20 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:33:22.426658 | orchestrator | 2025-07-12 14:33:22 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:33:24.687158 | orchestrator | 2025-07-12 14:33:24 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:33:26.969631 | orchestrator | 2025-07-12 14:33:26 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:33:29.255102 | orchestrator | 2025-07-12 14:33:29 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:33:31.566847 | orchestrator | 2025-07-12 14:33:31 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) is still in progress
2025-07-12 14:33:33.921884 | orchestrator | 2025-07-12 14:33:33 | INFO  | Live migration of e17d4f7b-4bc6-401f-a5df-0ca71650af27 (test-3) completed with status ACTIVE
2025-07-12 14:33:33.922075 | orchestrator | 2025-07-12 14:33:33 | INFO  | Live migrating server 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201
2025-07-12 14:33:43.918264 | orchestrator | 2025-07-12 14:33:43 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:33:46.259347 | orchestrator | 2025-07-12 14:33:46 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:33:48.609577 | orchestrator | 2025-07-12 14:33:48 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:33:51.118902 | orchestrator | 2025-07-12 14:33:51 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:33:53.404870 | orchestrator | 2025-07-12 14:33:53 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:33:55.749688 | orchestrator | 2025-07-12 14:33:55 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:33:58.039017 | orchestrator | 2025-07-12 14:33:58 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) is still in progress
2025-07-12 14:34:00.362488 | orchestrator | 2025-07-12 14:34:00 | INFO  | Live migration of 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 (test-2) completed with status ACTIVE
2025-07-12 14:34:00.362577 | orchestrator | 2025-07-12 14:34:00 | INFO  | Live migrating server 8a9fa1eb-0f17-4fa9-be08-066123879c47
2025-07-12 14:34:09.990820 | orchestrator | 2025-07-12 14:34:09 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:34:12.334550 | orchestrator | 2025-07-12 14:34:12 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:34:14.705105 | orchestrator | 2025-07-12 14:34:14 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:34:16.931863 | orchestrator | 2025-07-12 14:34:16 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:34:19.212019 | orchestrator | 2025-07-12 14:34:19 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:34:21.499393 | orchestrator | 2025-07-12 14:34:21 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:34:23.777802 | orchestrator | 2025-07-12 14:34:23 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) is still in progress
2025-07-12 14:34:26.094839 | orchestrator | 2025-07-12 14:34:26 | INFO  | Live migration of 8a9fa1eb-0f17-4fa9-be08-066123879c47 (test-1) completed with status ACTIVE
2025-07-12 14:34:26.094924 | orchestrator | 2025-07-12 14:34:26 | INFO  | Live migrating server 13302c0d-c25c-4b96-9be6-85da1a96ef57
2025-07-12 14:34:36.530669 | orchestrator | 2025-07-12 14:34:36 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:34:38.897614 | orchestrator | 2025-07-12 14:34:38 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:34:41.397777 | orchestrator | 2025-07-12 14:34:41 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:34:43.672447 | orchestrator | 2025-07-12 14:34:43 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:34:46.051982 | orchestrator | 2025-07-12 14:34:46 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:34:48.420373 | orchestrator | 2025-07-12 14:34:48 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:34:50.713035 | orchestrator | 2025-07-12 14:34:50 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:34:52.970204 | orchestrator | 2025-07-12 14:34:52 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:34:55.237486 | orchestrator | 2025-07-12 14:34:55 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) is still in progress
2025-07-12 14:34:57.544913 | orchestrator | 2025-07-12 14:34:57 | INFO  | Live migration of 13302c0d-c25c-4b96-9be6-85da1a96ef57 (test) completed with status ACTIVE
2025-07-12 14:34:57.901227 | orchestrator | + compute_list
2025-07-12 14:34:57.901328 | orchestrator | + osism manage compute list testbed-node-3
2025-07-12 14:35:00.645723 | orchestrator | +------+--------+----------+
2025-07-12 14:35:00.645839 | orchestrator | | ID | Name | Status |
2025-07-12 14:35:00.645854 | orchestrator | |------+--------+----------|
2025-07-12 14:35:00.645865 | orchestrator | +------+--------+----------+
2025-07-12 14:35:01.072171 | orchestrator | + osism manage compute list testbed-node-4
2025-07-12 14:35:03.681196 | orchestrator | +------+--------+----------+
2025-07-12 14:35:03.681311 | orchestrator | | ID | Name | Status |
2025-07-12 14:35:03.681325 | orchestrator | |------+--------+----------|
2025-07-12 14:35:03.681337 | orchestrator | +------+--------+----------+
2025-07-12 14:35:04.004711 | orchestrator | + osism manage compute list testbed-node-5
2025-07-12 14:35:07.353633 | orchestrator | +--------------------------------------+--------+----------+
2025-07-12 14:35:07.353746 | orchestrator | | ID | Name | Status |
2025-07-12 14:35:07.353762 | orchestrator | |--------------------------------------+--------+----------|
2025-07-12 14:35:07.353774 | orchestrator | | 27e4ee80-e0e1-49ed-8288-298fc5410447 | test-4 | ACTIVE |
2025-07-12 14:35:07.353784 | orchestrator | | e17d4f7b-4bc6-401f-a5df-0ca71650af27 | test-3 | ACTIVE |
2025-07-12 14:35:07.353796 | orchestrator | | 9d258b9d-f1a4-4d3c-8b19-4c976c7f7201 | test-2 | ACTIVE |
2025-07-12 14:35:07.353807 | orchestrator | | 8a9fa1eb-0f17-4fa9-be08-066123879c47 | test-1 | ACTIVE |
2025-07-12 14:35:07.353818 | orchestrator | | 13302c0d-c25c-4b96-9be6-85da1a96ef57 | test | ACTIVE |
2025-07-12 14:35:07.353829 | orchestrator | +--------------------------------------+--------+----------+
2025-07-12 14:35:07.721396 | orchestrator | + server_ping
2025-07-12 14:35:07.722807 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-12 14:35:07.723387 | orchestrator | ++ tr -d '\r'
2025-07-12 14:35:10.651277 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:35:10.651385 | orchestrator | + ping -c3 192.168.112.184
2025-07-12 14:35:10.664895 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data.
2025-07-12 14:35:10.664954 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=10.4 ms
2025-07-12 14:35:11.659083 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.44 ms
2025-07-12 14:35:12.661039 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=2.16 ms
2025-07-12 14:35:12.661177 | orchestrator |
2025-07-12 14:35:12.661196 | orchestrator | --- 192.168.112.184 ping statistics ---
2025-07-12 14:35:12.661208 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-07-12 14:35:12.661219 | orchestrator | rtt min/avg/max/mdev = 2.156/4.991/10.379/3.811 ms
2025-07-12 14:35:12.661903 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:35:12.661949 | orchestrator | + ping -c3 192.168.112.159
2025-07-12 14:35:12.672903 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data.
2025-07-12 14:35:12.672951 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=5.97 ms
2025-07-12 14:35:13.671022 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=2.58 ms
2025-07-12 14:35:14.673089 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=1.80 ms
2025-07-12 14:35:14.673252 | orchestrator |
2025-07-12 14:35:14.673282 | orchestrator | --- 192.168.112.159 ping statistics ---
2025-07-12 14:35:14.673303 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:35:14.673323 | orchestrator | rtt min/avg/max/mdev = 1.804/3.452/5.974/1.810 ms
2025-07-12 14:35:14.674101 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:35:14.674180 | orchestrator | + ping -c3 192.168.112.141
2025-07-12 14:35:14.682279 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2025-07-12 14:35:14.682359 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=5.49 ms
2025-07-12 14:35:15.680629 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.47 ms
2025-07-12 14:35:16.682994 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.00 ms
2025-07-12 14:35:16.683101 | orchestrator |
2025-07-12 14:35:16.683174 | orchestrator | --- 192.168.112.141 ping statistics ---
2025-07-12 14:35:16.683189 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:35:16.683200 | orchestrator | rtt min/avg/max/mdev = 2.004/3.321/5.489/1.544 ms
2025-07-12 14:35:16.683915 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:35:16.684011 | orchestrator | + ping -c3 192.168.112.105
2025-07-12 14:35:16.694308 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data.
2025-07-12 14:35:16.694367 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=6.01 ms
2025-07-12 14:35:17.692285 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.34 ms
2025-07-12 14:35:18.694362 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=2.02 ms
2025-07-12 14:35:18.694479 | orchestrator |
2025-07-12 14:35:18.694495 | orchestrator | --- 192.168.112.105 ping statistics ---
2025-07-12 14:35:18.694507 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:35:18.694519 | orchestrator | rtt min/avg/max/mdev = 2.022/3.457/6.008/1.808 ms
2025-07-12 14:35:18.694846 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 14:35:18.694869 | orchestrator | + ping -c3 192.168.112.120
2025-07-12 14:35:18.707894 | orchestrator | PING 192.168.112.120 (192.168.112.120) 56(84) bytes of data.
2025-07-12 14:35:18.707939 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=1 ttl=63 time=8.36 ms
2025-07-12 14:35:19.704913 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=2 ttl=63 time=3.26 ms
2025-07-12 14:35:20.706264 | orchestrator | 64 bytes from 192.168.112.120: icmp_seq=3 ttl=63 time=2.27 ms
2025-07-12 14:35:20.706367 | orchestrator |
2025-07-12 14:35:20.706383 | orchestrator | --- 192.168.112.120 ping statistics ---
2025-07-12 14:35:20.706396 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 14:35:20.706407 | orchestrator | rtt min/avg/max/mdev = 2.269/4.632/8.364/2.669 ms
2025-07-12 14:35:20.810641 | orchestrator | ok: Runtime: 0:19:19.468277
2025-07-12 14:35:20.853206 |
2025-07-12 14:35:20.853342 | TASK [Run tempest]
2025-07-12 14:35:21.387922 | orchestrator | skipping: Conditional result was False
2025-07-12 14:35:21.404597 |
2025-07-12 14:35:21.404788 | TASK [Check prometheus alert status]
2025-07-12 14:35:21.940567 | orchestrator | skipping: Conditional result was False
2025-07-12 14:35:21.943664 |
2025-07-12 14:35:21.943872 | PLAY RECAP
2025-07-12 14:35:21.944019 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-07-12 14:35:21.944089 |
2025-07-12 14:35:22.183859 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-12 14:35:22.184888 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-12 14:35:22.938106 |
2025-07-12 14:35:22.938978 | PLAY [Post output play]
2025-07-12 14:35:22.955924 |
2025-07-12 14:35:22.956071 | LOOP [stage-output : Register sources]
2025-07-12 14:35:23.027474 |
2025-07-12 14:35:23.027825 | TASK [stage-output : Check sudo]
2025-07-12 14:35:23.848701 | orchestrator | sudo: a password is required
2025-07-12 14:35:24.068582 | orchestrator | ok: Runtime: 0:00:00.016258
2025-07-12 14:35:24.084588 |
2025-07-12 14:35:24.084808 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-12 14:35:24.126004 |
2025-07-12 14:35:24.126322 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-12 14:35:24.206394 | orchestrator | ok
2025-07-12 14:35:24.216045 |
2025-07-12 14:35:24.216219 | LOOP [stage-output : Ensure target folders exist]
2025-07-12 14:35:24.669119 | orchestrator | ok: "docs"
2025-07-12 14:35:24.669483 |
2025-07-12 14:35:24.901481 | orchestrator | ok: "artifacts"
2025-07-12 14:35:25.131640 | orchestrator | ok: "logs"
2025-07-12 14:35:25.148125 |
2025-07-12 14:35:25.148292 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-12 14:35:25.185477 |
2025-07-12 14:35:25.185814 | TASK [stage-output : Make all log files readable]
2025-07-12 14:35:25.486625 | orchestrator | ok
2025-07-12 14:35:25.497379 |
2025-07-12 14:35:25.497565 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-12 14:35:25.533831 | orchestrator | skipping: Conditional result was False
2025-07-12 14:35:25.549115 |
2025-07-12 14:35:25.549309 | TASK [stage-output : Discover log files for compression]
2025-07-12 14:35:25.574821 | orchestrator | skipping: Conditional result was False
2025-07-12 14:35:25.589250 |
2025-07-12 14:35:25.589420 | LOOP [stage-output : Archive everything from logs]
2025-07-12 14:35:25.635602 |
2025-07-12 14:35:25.635880 | PLAY [Post cleanup play]
2025-07-12 14:35:25.644766 |
2025-07-12 14:35:25.644910 | TASK [Set cloud fact (Zuul deployment)]
2025-07-12 14:35:25.717600 | orchestrator | ok
2025-07-12 14:35:25.726825 |
2025-07-12 14:35:25.727000 | TASK [Set cloud fact (local deployment)]
2025-07-12 14:35:25.771980 | orchestrator | skipping: Conditional result was False
2025-07-12 14:35:25.785572 |
2025-07-12 14:35:25.785731 | TASK [Clean the cloud environment]
2025-07-12 14:35:26.418525 | orchestrator | 2025-07-12 14:35:26 - clean up servers
2025-07-12 14:35:27.149817 | orchestrator | 2025-07-12 14:35:27 - testbed-manager
2025-07-12 14:35:27.236464 | orchestrator | 2025-07-12 14:35:27 - testbed-node-3
2025-07-12 14:35:27.324137 | orchestrator | 2025-07-12 14:35:27 - testbed-node-4
2025-07-12 14:35:27.414506 | orchestrator | 2025-07-12 14:35:27 - testbed-node-5
2025-07-12 14:35:27.506462 | orchestrator | 2025-07-12 14:35:27 - testbed-node-0
2025-07-12 14:35:27.600173 | orchestrator | 2025-07-12 14:35:27 - testbed-node-1
2025-07-12 14:35:27.685652 | orchestrator | 2025-07-12 14:35:27 - testbed-node-2
2025-07-12 14:35:27.774382 | orchestrator | 2025-07-12 14:35:27 - clean up keypairs
2025-07-12 14:35:27.797129 | orchestrator | 2025-07-12 14:35:27 - testbed
2025-07-12 14:35:27.829748 | orchestrator | 2025-07-12 14:35:27 - wait for servers to be gone
2025-07-12 14:35:38.629221 | orchestrator | 2025-07-12 14:35:38 - clean up ports
2025-07-12 14:35:38.815052 | orchestrator | 2025-07-12 14:35:38 - 26732a5a-b00f-40a4-b8aa-2a61b883f84c
2025-07-12 14:35:39.093660 | orchestrator | 2025-07-12 14:35:39 - 6376d7b2-a024-478a-a60e-a0bcdd4a4766
2025-07-12 14:35:39.332834 | orchestrator | 2025-07-12 14:35:39 - 66096000-683b-400a-8bf4-8a424ae84d9c
2025-07-12 14:35:39.766151 | orchestrator | 2025-07-12 14:35:39 - 7c09d0be-de8f-4656-ab31-5618ba8237d0
2025-07-12 14:35:39.966865 | orchestrator | 2025-07-12 14:35:39 - a8f39262-6412-4074-909a-27644e9771d3
2025-07-12 14:35:40.169944 | orchestrator | 2025-07-12 14:35:40 - d681085c-61af-42cd-a7b6-12e7a5e5f74c
2025-07-12 14:35:40.385090 | orchestrator | 2025-07-12 14:35:40 - f26be7db-5503-4a2e-a753-d53e2ca9e863
2025-07-12 14:35:40.588639 | orchestrator | 2025-07-12 14:35:40 - clean up volumes
2025-07-12 14:35:40.716484 | orchestrator | 2025-07-12 14:35:40 - testbed-volume-2-node-base
2025-07-12 14:35:40.755802 | orchestrator | 2025-07-12 14:35:40 - testbed-volume-0-node-base
2025-07-12 14:35:40.796738 | orchestrator | 2025-07-12 14:35:40 - testbed-volume-5-node-base
2025-07-12 14:35:40.845749 | orchestrator | 2025-07-12 14:35:40 - testbed-volume-3-node-base
2025-07-12 14:35:40.892461 | orchestrator | 2025-07-12 14:35:40 - testbed-volume-1-node-base
2025-07-12 14:35:40.945627 | orchestrator | 2025-07-12 14:35:40 - testbed-volume-4-node-base
2025-07-12 14:35:40.988533 | orchestrator | 2025-07-12 14:35:40 - testbed-volume-manager-base
2025-07-12 14:35:41.031762 | orchestrator | 2025-07-12 14:35:41 - testbed-volume-1-node-4
2025-07-12 14:35:41.074085 | orchestrator | 2025-07-12 14:35:41 - testbed-volume-2-node-5
2025-07-12 14:35:41.118224 | orchestrator | 2025-07-12 14:35:41 - testbed-volume-6-node-3
2025-07-12 14:35:41.160603 | orchestrator | 2025-07-12 14:35:41 - testbed-volume-0-node-3
2025-07-12 14:35:41.201802 | orchestrator | 2025-07-12 14:35:41 - testbed-volume-5-node-5
2025-07-12 14:35:41.242684 | orchestrator | 2025-07-12 14:35:41 - testbed-volume-3-node-3
2025-07-12 14:35:41.282097 | orchestrator | 2025-07-12 14:35:41 - testbed-volume-7-node-4
2025-07-12 14:35:41.323580 | orchestrator | 2025-07-12 14:35:41 - testbed-volume-8-node-5
2025-07-12 14:35:41.366215 | orchestrator | 2025-07-12 14:35:41 - testbed-volume-4-node-4
2025-07-12 14:35:41.407076 | orchestrator | 2025-07-12 14:35:41 - disconnect routers
2025-07-12 14:35:41.513228 | orchestrator | 2025-07-12 14:35:41 - testbed
2025-07-12 14:35:42.436403 | orchestrator | 2025-07-12 14:35:42 - clean up subnets
2025-07-12 14:35:42.493562 | orchestrator | 2025-07-12 14:35:42 - subnet-testbed-management
2025-07-12 14:35:42.657530 | orchestrator | 2025-07-12 14:35:42 - clean up networks
2025-07-12 14:35:42.802873 | orchestrator | 2025-07-12 14:35:42 - net-testbed-management
2025-07-12 14:35:43.102841 | orchestrator | 2025-07-12 14:35:43 - clean up security groups
2025-07-12 14:35:43.150655 | orchestrator | 2025-07-12 14:35:43 - testbed-node
2025-07-12 14:35:43.270887 | orchestrator | 2025-07-12 14:35:43 - testbed-management
2025-07-12 14:35:43.389495 | orchestrator | 2025-07-12 14:35:43 - clean up floating ips
2025-07-12 14:35:43.425684 | orchestrator | 2025-07-12 14:35:43 - 81.163.193.5
2025-07-12 14:35:43.778343 | orchestrator | 2025-07-12 14:35:43 - clean up routers
2025-07-12 14:35:43.838282 | orchestrator | 2025-07-12 14:35:43 - testbed
2025-07-12 14:35:44.845421 | orchestrator | ok: Runtime: 0:00:18.508312
2025-07-12 14:35:44.849599 |
2025-07-12 14:35:44.849794 | PLAY RECAP
2025-07-12 14:35:44.849940 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-12 14:35:44.850012 |
2025-07-12 14:35:44.998435 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-12 14:35:45.000796 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-12 14:35:45.819011 |
2025-07-12 14:35:45.819201 | PLAY [Cleanup play]
2025-07-12 14:35:45.836376 |
2025-07-12 14:35:45.836539 | TASK [Set cloud fact (Zuul deployment)]
2025-07-12 14:35:45.906599 | orchestrator | ok
2025-07-12 14:35:45.919692 |
2025-07-12 14:35:45.919910
| TASK [Set cloud fact (local deployment)] 2025-07-12 14:35:45.955160 | orchestrator | skipping: Conditional result was False 2025-07-12 14:35:45.969185 | 2025-07-12 14:35:45.969361 | TASK [Clean the cloud environment] 2025-07-12 14:35:47.112838 | orchestrator | 2025-07-12 14:35:47 - clean up servers 2025-07-12 14:35:47.582603 | orchestrator | 2025-07-12 14:35:47 - clean up keypairs 2025-07-12 14:35:47.602917 | orchestrator | 2025-07-12 14:35:47 - wait for servers to be gone 2025-07-12 14:35:47.647894 | orchestrator | 2025-07-12 14:35:47 - clean up ports 2025-07-12 14:35:47.723825 | orchestrator | 2025-07-12 14:35:47 - clean up volumes 2025-07-12 14:35:47.796255 | orchestrator | 2025-07-12 14:35:47 - disconnect routers 2025-07-12 14:35:47.825685 | orchestrator | 2025-07-12 14:35:47 - clean up subnets 2025-07-12 14:35:47.848721 | orchestrator | 2025-07-12 14:35:47 - clean up networks 2025-07-12 14:35:48.010174 | orchestrator | 2025-07-12 14:35:48 - clean up security groups 2025-07-12 14:35:48.053131 | orchestrator | 2025-07-12 14:35:48 - clean up floating ips 2025-07-12 14:35:48.078634 | orchestrator | 2025-07-12 14:35:48 - clean up routers 2025-07-12 14:35:48.509136 | orchestrator | ok: Runtime: 0:00:01.350635 2025-07-12 14:35:48.512312 | 2025-07-12 14:35:48.512443 | PLAY RECAP 2025-07-12 14:35:48.512531 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-07-12 14:35:48.512574 | 2025-07-12 14:35:48.638068 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-07-12 14:35:48.639074 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-07-12 14:35:49.378818 | 2025-07-12 14:35:49.379745 | PLAY [Base post-fetch] 2025-07-12 14:35:49.395617 | 2025-07-12 14:35:49.395770 | TASK [fetch-output : Set log path for multiple nodes] 2025-07-12 14:35:49.452222 | orchestrator | skipping: Conditional result was False 2025-07-12 14:35:49.466376 | 
2025-07-12 14:35:49.466581 | TASK [fetch-output : Set log path for single node] 2025-07-12 14:35:49.539328 | orchestrator | ok 2025-07-12 14:35:49.551135 | 2025-07-12 14:35:49.551333 | LOOP [fetch-output : Ensure local output dirs] 2025-07-12 14:35:50.016939 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/c330580256be49afbe62cc1d895a3b2b/work/logs" 2025-07-12 14:35:50.285184 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c330580256be49afbe62cc1d895a3b2b/work/artifacts" 2025-07-12 14:35:50.575093 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c330580256be49afbe62cc1d895a3b2b/work/docs" 2025-07-12 14:35:50.600180 | 2025-07-12 14:35:50.600416 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-07-12 14:35:51.532847 | orchestrator | changed: .d..t...... ./ 2025-07-12 14:35:51.533140 | orchestrator | changed: All items complete 2025-07-12 14:35:51.533191 | 2025-07-12 14:35:52.264458 | orchestrator | changed: .d..t...... ./ 2025-07-12 14:35:53.031261 | orchestrator | changed: .d..t...... 
./ 2025-07-12 14:35:53.054031 | 2025-07-12 14:35:53.054173 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-07-12 14:35:53.559676 | orchestrator -> localhost | ok: Item: artifacts Runtime: 0:00:00.011013 2025-07-12 14:35:53.863617 | orchestrator -> localhost | ok: Item: docs Runtime: 0:00:00.009964 2025-07-12 14:35:53.876592 | 2025-07-12 14:35:53.876683 | PLAY RECAP 2025-07-12 14:35:53.876734 | orchestrator | ok: 4 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-07-12 14:35:53.876779 | 2025-07-12 14:35:54.019984 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-07-12 14:35:54.022608 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-07-12 14:35:54.767085 | 2025-07-12 14:35:54.767338 | PLAY [Base post] 2025-07-12 14:35:54.782826 | 2025-07-12 14:35:54.782986 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-07-12 14:35:55.752148 | orchestrator | changed 2025-07-12 14:35:55.762519 | 2025-07-12 14:35:55.762722 | PLAY RECAP 2025-07-12 14:35:55.762869 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-07-12 14:35:55.762951 | 2025-07-12 14:35:55.889572 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-07-12 14:35:55.890537 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-07-12 14:35:56.701964 | 2025-07-12 14:35:56.702154 | PLAY [Base post-logs] 2025-07-12 14:35:56.713429 | 2025-07-12 14:35:56.713593 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-07-12 14:35:57.194994 | localhost | changed 2025-07-12 14:35:57.213585 | 2025-07-12 14:35:57.213863 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-07-12 14:35:57.243533 | localhost | ok 2025-07-12 14:35:57.247014 | 2025-07-12 14:35:57.247127 | TASK [Set 
zuul-log-path fact] 2025-07-12 14:35:57.265295 | localhost | ok 2025-07-12 14:35:57.281027 | 2025-07-12 14:35:57.281191 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-07-12 14:35:57.307676 | localhost | ok 2025-07-12 14:35:57.313325 | 2025-07-12 14:35:57.313499 | TASK [upload-logs : Create log directories] 2025-07-12 14:35:57.826562 | localhost | changed 2025-07-12 14:35:57.834627 | 2025-07-12 14:35:57.834917 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-07-12 14:35:58.382535 | localhost -> localhost | ok: Runtime: 0:00:00.008001 2025-07-12 14:35:58.386773 | 2025-07-12 14:35:58.386966 | TASK [upload-logs : Upload logs to log server] 2025-07-12 14:35:58.930524 | localhost | Output suppressed because no_log was given 2025-07-12 14:35:58.932964 | 2025-07-12 14:35:58.933087 | LOOP [upload-logs : Compress console log and json output] 2025-07-12 14:35:58.987140 | localhost | skipping: Conditional result was False 2025-07-12 14:35:58.993193 | localhost | skipping: Conditional result was False 2025-07-12 14:35:59.004729 | 2025-07-12 14:35:59.004994 | LOOP [upload-logs : Upload compressed console log and json output] 2025-07-12 14:35:59.052724 | localhost | skipping: Conditional result was False 2025-07-12 14:35:59.053328 | 2025-07-12 14:35:59.056612 | localhost | skipping: Conditional result was False 2025-07-12 14:35:59.070419 | 2025-07-12 14:35:59.070665 | LOOP [upload-logs : Upload console log and json output]